query_id: stringlengths (32–32)
query: stringlengths (6–5.38k)
positive_passages: listlengths (1–22)
negative_passages: listlengths (9–100)
subset: stringclasses (7 values)
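Each row below pairs a query with its positive and negative passages under the schema above. As a minimal sketch of how such rows could be iterated, assuming each row is stored as one JSON object per line with exactly these field names (the file name and the iter_rows helper are illustrative, not part of the dataset):

import json

def iter_rows(path="scidocsrr.jsonl"):
    # Yield one dataset row at a time, assuming JSON-lines storage.
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            yield json.loads(line)

for row in iter_rows():
    # Fields follow the schema above: query_id, query, passage lists, subset.
    print(row["query_id"], row["subset"], row["query"][:60])
    for passage in row["positive_passages"]:
        print("  +", passage["docid"], passage["text"][:60])
    for passage in row["negative_passages"]:
        print("  -", passage["docid"], passage["text"][:60])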
24fc0959f0cf5649e13c6338f8a89b91
Measuring Latency in Virtual Environments
[ { "docid": "a81c87374e7ea9a3066f643ac89bfd2b", "text": "Image edge detection is a process of locating the e dg of an image which is important in finding the approximate absolute gradient magnitude at each point I of an input grayscale image. The problem of getting an appropriate absolute gradient magnitude for edges lies in the method used. The Sobel operator performs a 2-D spatial gradient measurement on images. Transferri ng a 2-D pixel array into statistically uncorrelated data se t enhances the removal of redundant data, as a result, reduction of the amount of data is required to represent a digital image. The Sobel edge detector uses a pair of 3 x 3 convolution masks, one estimating gradient in the x-direction and the other estimating gradient in y–direction. The Sobel detector is incredibly sensit ive o noise in pictures, it effectively highlight them as edges. Henc e, Sobel operator is recommended in massive data communication found in data transfer.", "title": "" } ]
[ { "docid": "e646f3cd80aecac679558148bff3b1e5", "text": "Analog front-end (AFE) circuits, which mainly consist of a transimpedance amplifier (TIA) with wide dynamic range and a timing discriminator with double threshold voltage, were designed and implemented for a pulsed time-of-fight 4-D imaging LADAR receiver. The preamplifier of the proposed TIA adopts a shunt-feedback topology to amplify weak echo signal, and a current-mirror topology to amplify strong one, respectively. The proposed AFE can capture directly the pulsed echo amplitude with wide dynamic range through programmable gain control switches. The proposed AFE circuits, which achieve a high gain of 106 dB<inline-formula> <tex-math notation=\"LaTeX\">$\\Omega $ </tex-math></inline-formula>, a linear dynamic range of 80 dB, an averaged input-referred noise density of 0.89 pA/Hz<sup>0.5</sup> and a minimum detectable signal of <inline-formula> <tex-math notation=\"LaTeX\">$0.36~\\mu \\text{A}$ </tex-math></inline-formula> at SNR = 5, and a sensitivity of 8 nW with APD of 45 A/W, were designed with 3.3 V devices and fabricated in a 0.18-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> standard CMOS process. The total area of AFE, which includes the circuit core, bandgap and bias circuits, and I/O PAD, is approximately equal to <inline-formula> <tex-math notation=\"LaTeX\">$1.20\\times 1.13$ </tex-math></inline-formula> mm<sup>2</sup>.", "title": "" }, { "docid": "4bca13cc04fc128844ecc48c0357b974", "text": "From its roots in physics, mathematics, and biology, the study of complexity science, or complex adaptive systems, has expanded into the domain of organizations and systems of organizations. Complexity science is useful for studying the evolution of complex organizations -entities with multiple, diverse, interconnected elements. Evolution of complex organizations often is accompanied by feedback effects, nonlinearity, and other conditions that add to the complexity of existing organizations and the unpredictability of the emergence of new entities. Health care organizations are an ideal setting for the application of complexity science due to the diversity of organizational forms and interactions among organizations that are evolving. Too, complexity science can benefit from attention to the world’s most complex human organizations. Organizations within and across the health care sector are increasingly interdependent. Not only are new, highly powerful and diverse organizational forms being created, but also the restructuring has occurred within very short periods of time. In this chapter, we review the basic tenets of complexity science. We identify a series of key differences between the complexity science and established theoretical approaches to studying health organizations, based on the ways in which time, space, and constructs are framed. The contrasting perspectives are demonstrated using two case examples drawn from healthcare innovation and healthcare integrated systems research. Complexity science broadens and deepens the scope of inquiry into health care organizations, expands corresponding methods of research, and increases the ability of theory to generate valid research on complex organizational forms. 
Formatted", "title": "" }, { "docid": "59db435e906db2c198afdc5cc7c7de2c", "text": "Although the recent advances in the sparse representations of images have achieved outstanding denosing results, removing real, structured noise in digital videos remains a challenging problem. We show the utility of reliable motion estimation to establish temporal correspondence across frames in order to achieve high-quality video denoising. In this paper, we propose an adaptive video denosing framework that integrates robust optical flow into a non-local means (NLM) framework with noise level estimation. The spatial regularization in optical flow is the key to ensure temporal coherence in removing structured noise. Furthermore, we introduce approximate K-nearest neighbor matching to significantly reduce the complexity of classical NLM methods. Experimental results show that our system is comparable with the state of the art in removing AWGN, and significantly outperforms the state of the art in removing real, structured noise.", "title": "" }, { "docid": "95a58a9fa31373296af2c41e47fa0884", "text": "Force.com is the preeminent on-demand application development platform in use today, supporting some 55,000+ organizations. Individual enterprises and commercial software-as-a-service (SaaS) vendors trust the platform to deliver robust, reliable, Internet-scale applications. To meet the extreme demands of its large user population, Force.com's foundation is a metadatadriven software architecture that enables multitenant applications.\n The focus of this paper is multitenancy, a fundamental design approach that can dramatically improve SaaS application management. This paper defines multitenancy, explains its benefits, and demonstrates why metadata-driven architectures are the premier choice for implementing multitenancy.", "title": "" }, { "docid": "5b2b0a3a857d06246cebb69e6e575b5f", "text": "This paper develops a novel framework for feature extraction based on a combination of Linear Discriminant Analysis and cross-correlation. Multiple Electrocardiogram (ECG) signals, acquired from the human heart in different states such as in fear, during exercise, etc. are used for simulations. The ECG signals are composed of P, Q, R, S and T waves. They are characterized by several parameters and the important information relies on its HRV (Heart Rate Variability). Human interpretation of such signals requires experience and incorrect readings could result in potentially life threatening and even fatal consequences. Thus a proper interpretation of ECG signals is of paramount importance. This work focuses on designing a machine based classification algorithm for ECG signals. The proposed algorithm filters the ECG signals to reduce the effects of noise. It then uses the Fourier transform to transform the signals into the frequency domain for analysis. The frequency domain signal is then cross correlated with predefined classes of ECG signals, in a manner similar to pattern recognition. The correlated co-efficients generated are then thresholded. Moreover Linear Discriminant Analysis is also applied. Linear Discriminant Analysis makes classes of different multiple ECG signals. LDA makes classes on the basis of mean, global mean, mean subtraction, transpose, covariance, probability and frequencies. And also setting thresholds for the classes. The distributed space area is divided into regions corresponding to each of the classes. Each region associated with a class is defined by its thresholds. 
So it is useful in distinguishing ECG signals from each other. And pedantic details from LDA (Linear Discriminant Analysis) output graph can be easily taken in account rapidly. The output generated after applying cross-correlation and LDA displays either normal, fear, smoking or exercise ECG signal. As a result, the system can help clinically on large scale by providing reliable and accurate classification in a fast and computationally efficient manner. The doctors can use this system by gaining more efficiency. As very few errors are involved in it, showing accuracy between 90% 95%.", "title": "" }, { "docid": "361bc333d47d2e1d4b6a6e8654d2659d", "text": "Both the industrial organization theory (IO) and the resource-based view of the firm (RBV) have advanced our understanding of the antecedents of competitive advantage but few have attempted to verify the outcome variables of competitive advantage and the persistence of such outcome variables. Here by integrating both IO and RBV perspectives in the analysis of competitive advantage at the firm level, our study clarifies a conceptual distinction between two types of competitive advantage: temporary competitive advantage and sustainable competitive advantage, and explores how firms transform temporary competitive advantage into sustainable competitive advantage. Testing of the developed hypotheses, based on a survey of 165 firms from Taiwan’s information and communication technology industry, suggests that firms with a stronger market position can only attain a better outcome of temporary competitive advantage whereas firms possessing a superior position in technological resources or capabilities can attain a better outcome of sustainable competitive advantage. More importantly, firms can leverage a temporary competitive advantage as an outcome of market position, to improving their technological resource and capability position, which in turn can enhance their sustainable competitive advantage.", "title": "" }, { "docid": "856e7eeca46eb2c1a27ac0d1b5f0dc0b", "text": "The World Health Organization recommends four antenatal visits for pregnant women in developing countries. Cash transfers have been used to incentivize participation in health services. We examined whether modest cash transfers for participation in antenatal care would increase antenatal care attendance and delivery in a health facility in Kisoro, Uganda. Twenty-three villages were randomized into four groups: 1) no cash; 2) 0.20 United States Dollars (USD) for each of four visits; 3) 0.40 USD for a single first trimester visit only; 4) 0.40 USD for each of four visits. Outcomes were three or more antenatal visits and delivery in a health facility. Chi-square, analysis of variance, and generalized estimating equation analyses were performed to detect differences in outcomes. Women in the 0.40 USD/visit group had higher odds of three or more antenatal visits than the control group (OR 1.70, 95% CI: 1.13-2.57). The odds of delivering in a health facility did not differ between groups. However, women with more antenatal visits had higher odds of delivering in a health facility (OR 1.21, 95% CI: 1.03-1.42). These findings are important in an area where maternal mortality is high, utilization of health services is low, and resources are scarce.", "title": "" }, { "docid": "eaf0693dd5447d58d04e10aef02ef331", "text": "A key step in the semantic analysis of network traffic is to parse the traffic stream according to the high-level protocols it contains. 
This process transforms raw bytes into structured, typed, and semantically meaningful data fields that provide a high-level representation of the traffic. However, constructing protocol parsers by hand is a tedious and error-prone affair due to the complexity and sheer number of application protocols.This paper presents binpac, a declarative language and compiler designed to simplify the task of constructing robust and efficient semantic analyzers for complex network protocols. We discuss the design of the binpac language and a range of issues in generating efficient parsers from high-level specifications. We have used binpac to build several protocol parsers for the \"Bro\" network intrusion detection system, replacing some of its existing analyzers (handcrafted in C++), and supplementing its operation with analyzers for new protocols. We can then use Bro's powerful scripting language to express application-level analysis of network traffic in high-level terms that are both concise and expressive. binpac is now part of the open-source Bro distribution.", "title": "" }, { "docid": "c61a6e26941409db9cb4a95c05a82785", "text": "An important aspect in visualization design is the connection between what a designer does and the decisions the designer makes. Existing design process models, however, do not explicitly link back to models for visualization design decisions. We bridge this gap by introducing the design activity framework, a process model that explicitly connects to the nested model, a well-known visualization design decision model. The framework includes four overlapping activities that characterize the design process, with each activity explicating outcomes related to the nested model. Additionally, we describe and characterize a list of exemplar methods and how they overlap among these activities. The design activity framework is the result of reflective discussions from a collaboration on a visualization redesign project, the details of which we describe to ground the framework in a real-world design process. Lastly, from this redesign project we provide several research outcomes in the domain of cybersecurity, including an extended data abstraction and rich opportunities for future visualization research.", "title": "" }, { "docid": "baafff8270bf3d33d70544130968f6d3", "text": "The authors present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with magnetic resonance imaging (MRI) or computed tomography (CT). Because the authors allow for mixtures of materials and treat voxels as regions, their technique reduces errors that other classification techniques can create along boundaries between materials and is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurately and classifies noisy, low-resolution data well. There are two unusual aspects to the authors' approach. First, they assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; the authors compute the relative proportion of each material in the voxels. Second, they incorporate information from neighboring voxels into the classification process by reconstructing a continuous function, /spl rho/(x), from the samples and then looking at the distribution of values that /spl rho/(x) takes on within the region of a voxel. 
This distribution of values is represented by a histogram taken over the region of the voxel; the mixture of materials that those values measure is identified within the voxel using a probabilistic Bayesian approach that matches the histogram by finding the mixture of materials within each voxel most likely to have created the histogram. The size of regions that the authors classify is chosen to match the spacing of the samples because the spacing is intrinsically related to the minimum feature size that the reconstructed continuous function can represent.", "title": "" },
{ "docid": "0102748c7f9969fb53a3b5ee76b6eefe", "text": "Face verification is the task of deciding, by analyzing face images, whether a person is who he/she claims to be. This is very challenging due to image variations in lighting, pose, facial expression, and age. The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face verification accuracy. In this paper we propose a new method, named Cosine Similarity Metric Learning (CSML), for learning a distance metric for facial verification. The use of cosine similarity in our method leads to an effective learning algorithm which can improve the generalization ability of any given metric. Our method is tested on the state-of-the-art dataset, the Labeled Faces in the Wild (LFW), and has achieved the highest accuracy in the literature. Face verification has been extensively researched for decades. The reason for its popularity is the non-intrusiveness and wide range of practical applications, such as access control, video surveillance, and telecommunication. The biggest challenge in face verification comes from the numerous variations of a face image, due to changes in lighting, pose, facial expression, and age. It is a very difficult problem, especially using images captured in a totally uncontrolled environment, for instance, images from surveillance cameras, or from the Web. Over the years, many public face datasets have been created for researchers to advance the state of the art and make their methods comparable. This practice has proved to be extremely useful. FERET [1] is the first popular face dataset freely available to researchers. It was created in 1993 and since then research in face recognition has advanced considerably. Researchers have come very close to fully recognizing all the frontal images in FERET [2,3,4,5,6]. However, these methods are not robust to deal with non-frontal face images. Recently a new face dataset named the Labeled Faces in the Wild (LFW) [7] was created. LFW is a full protocol for evaluating face verification algorithms. Unlike FERET, LFW is designed for unconstrained face verification. Faces in LFW can vary in all possible ways due to pose, lighting, expression, age, scale, and misalignment (Figure 1). Methods for frontal images cannot cope with these variations, and as such many researchers have turned to machine learning to develop learning-based face verification methods [8,9]. One of these approaches is to learn a transformation matrix from the data so that the Euclidean distance can perform better in the new subspace. Learning such a transformation matrix is equivalent to learning a Mahalanobis metric in the original space [10]. Xing et al. [11] used semidefinite programming to learn a Mahalanobis distance metric for clustering. Their algorithm aims to minimize the sum of squared distances between similarly labeled inputs, while maintaining a lower bound on the sum of distances between differently labeled inputs. Goldberger et al. [10] proposed Neighbourhood Component Analysis (NCA), a distance metric learning algorithm especially designed to improve kNN classification. The algorithm is to learn a Mahalanobis distance by minimizing the leave-one-out cross validation error of the kNN classifier on a training set. Because it uses a softmax activation function to convert distance to probability, the gradient computation step is expensive. Weinberger et al. [12] proposed a method that learns a matrix designed to improve the performance of kNN classification. The objective function is composed of two terms. The first term minimizes the distance between target neighbours. The second term is a hinge-loss that encourages target neighbours to be at least one distance unit closer than points from other classes. It requires information about the class of each sample. As a result, their method is not applicable for the restricted setting in LFW (see section 2.1). Recently, Davis et al. [13] have taken an information theoretic approach to learn a Mahalanobis metric under a wide range of possible constraints and prior knowledge on the Mahalanobis distance. Their method regularizes the learned matrix to make it as close as possible to a known prior matrix. The closeness is measured as a Kullback-Leibler divergence between two Gaussian distributions corresponding to the two matrices. In this paper, we propose a new method named Cosine Similarity Metric Learning (CSML). There are two main contributions. The first contribution is that we have shown cosine similarity to be an effective alternative to Euclidean distance in the metric learning problem. The second contribution is that CSML can improve the generalization ability of an existing metric significantly in most cases. Our method is different from all the above methods in terms of distance measures. All of the other methods use Euclidean distance to measure the dissimilarities between samples in the transformed space whilst our method uses cosine similarity, which leads to a simple and effective metric learning method. The rest of this paper is structured as follows. Section 2 presents the CSML method in detail. Section 3 presents how CSML can be applied to face verification. Experimental results are presented in section 4. Finally, the conclusion is given in section 5. 1 Cosine Similarity Metric Learning The general idea is to learn a transformation matrix from training data so that cosine similarity performs well in the transformed subspace. The performance is measured by cross validation error (cve). 1.1 Cosine similarity Cosine similarity (CS) between two vectors x and y is defined as: CS(x, y) = (x^T y) / (‖x‖ ‖y‖). Cosine similarity has a special property that makes it suitable for metric learning: the resulting similarity measure is always within the range of −1 and +1. As shown in section 1.3, this property allows the objective function to be simple and effective. 1.2 Metric learning formulation Let {x_i, y_i, l_i}_{i=1}^{s} denote a training set of s labeled samples with pairs of input vectors x_i, y_i ∈ R^m and binary class labels l_i ∈ {1, 0} which indicate whether x_i and y_i match or not. The goal is to learn a linear transformation A : R^m → R^d (d ≤ m), which we will use to compute cosine similarities in the transformed subspace as: CS(x, y, A) = ((Ax)^T (Ay)) / (‖Ax‖ ‖Ay‖) = (x^T A^T A y) / (√(x^T A^T A x) √(y^T A^T A y)). Specifically, we want to learn the linear transformation that minimizes the cross validation error when similarities are measured in this way. We begin by defining the objective function. 1.3 Objective function First, we define positive and negative sample index sets Pos and Neg as:", "title": "" },
{ "docid": "222c51f079c785bb2aa64d2937e50ff0", "text": "Security and privacy in cloud computing are critical components for various organizations that depend on the cloud in their daily operations. Customers' data and the organizations' proprietary information have been subject to various attacks in the past. In this paper, we develop a set of Moving Target Defense (MTD) strategies that randomize the location of the Virtual Machines (VMs) to harden the cloud against a class of Multi-Armed Bandit (MAB) policy-based attacks. These attack policies capture the behavior of adversaries that seek to explore the allocation of VMs in the cloud and exploit the ones that provide the highest rewards (e.g., access to critical datasets, ability to observe credit card transactions, etc.). We assess through simulation experiments the performance of our MTD strategies, showing that they can make MAB policy-based attacks no more effective than random attack policies. Additionally, we show the effects of critical parameters – such as discount factors, the time between randomizing the locations of the VMs and variance in the rewards obtained – on the performance of our defenses. We validate our results through simulations and a real OpenStack system implementation in our lab to assess migration times and down times under different system loads.", "title": "" },
{ "docid": "3230ef371e7475cfa82c7ab240fdd610", "text": "After a decade of fundamental interdisciplinary research in machine learning, the spadework in this field has been done; the 1990s should see the widespread exploitation of knowledge discovery as an aid to assembling knowledge bases. The contributors to the AAAI Press book Knowledge Discovery in Databases were excited at the potential benefits of this research. The editors hope that some of this excitement will communicate itself to AI Magazine readers of this article.", "title": "" },
{ "docid": "424b80d94ec00c6795d8c8a689c1d119", "text": "With more than 250 million active users, Facebook (FB) is currently one of the most important online social networks. Our goal in this paper is to obtain a representative (unbiased) sample of Facebook users by crawling its social graph. In this quest, we consider and implement several candidate techniques. Two approaches that are found to perform well are the Metropolis-Hastings random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the \"ground-truth\" (UNI - obtained through true uniform sampling of FB userIDs). In contrast, the traditional Breadth-First-Search (BFS) and Random Walk (RW) perform quite poorly, producing substantially biased results. In addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process.
We show how these can be used to effectively determine when a random walk sample is of adequate size and quality for subsequent use (i.e., when it is safe to cease sampling). Using these methods, we collect the first, to the best of our knowledge, unbiased sample of Facebook. Finally, we use one of our representative datasets, collected through MHRW, to characterize several key properties of Facebook.", "title": "" }, { "docid": "8fd28fb7c30c3dc30d4a92f95d38c966", "text": "In recent years, iris recognition is becoming a very active topic in both research and practical applications. However, fake iris is a potential threat there are potential threats for iris-based systems. This paper presents a novel fake iris detection method based on the analysis of 2-D Fourier spectra together with iris image quality assessment. First, image quality assessment method is used to exclude the defocused, motion blurred fake iris. Then statistical properties of Fourier spectra for fake iris are used for clear fake iris detection. Experimental results show that the proposed method can detect photo iris and printed iris effectively.", "title": "" }, { "docid": "5f01cb5c34ac9182f6485f70d19101db", "text": "Gastroeophageal reflux is a condition in which the acidified liquid content of the stomach backs up into the esophagus. The antiacid magaldrate and prokinetic domperidone are two drugs clinically used for the treatment of gastroesophageal reflux symptoms. However, the evidence of a superior effectiveness of this combination in comparison with individual drugs is lacking. A double-blind, randomized and comparative clinical trial study was designed to characterize the efficacy and safety of a fixed dose combination of magaldrate (800 mg)/domperidone (10 mg) against domperidone alone (10 mg), in patients with gastroesophageal reflux symptoms. One hundred patients with gastroesophageal reflux diagnosed by Carlsson scale were randomized to receive a chewable tablet of a fixed dose of magaldrate/domperidone combination or domperidone alone four times each day during a month. Magaldrate/domperidone combination showed a superior efficacy to decrease global esophageal (pyrosis, regurgitation, dysphagia, hiccup, gastroparesis, sialorrhea, globus pharyngeus and nausea) and extraesophageal (chronic cough, hoarseness, asthmatiform syndrome, laryngitis, pharyngitis, halitosis and chest pain) reflux symptoms than domperidone alone. In addition, magaldrate/domperidone combination improved in a statistically manner the quality of life of patients with gastroesophageal reflux respect to monotherapy, and more patients perceived the combination as a better treatment. Both treatments were well tolerated. Data suggest that oral magaldrate/domperidone mixture could be a better option in the treatment of gastroesophageal reflux symptoms than only domperidone.", "title": "" }, { "docid": "7228073bef61131c2efcdc736d90ca1b", "text": "With the advent of word representations, word similarity tasks are becoming increasing popular as an evaluation metric for the quality of the representations. In this paper, we present manually annotated monolingual word similarity datasets of six Indian languages – Urdu, Telugu, Marathi, Punjabi, Tamil and Gujarati. These languages are most spoken Indian languages worldwide after Hindi and Bengali. For the construction of these datasets, our approach relies on translation and re-annotation of word similarity datasets of English. 
We also present baseline scores for word representation models using state-of-the-art techniques for Urdu, Telugu and Marathi by evaluating them on newly created word similarity datasets.", "title": "" }, { "docid": "511c90eadbbd4129fdf3ee9e9b2187d3", "text": "BACKGROUND\nPressure ulcers are associated with substantial health burdens but may be preventable.\n\n\nPURPOSE\nTo review the clinical utility of pressure ulcer risk assessment instruments and the comparative effectiveness of preventive interventions in persons at higher risk.\n\n\nDATA SOURCES\nMEDLINE (1946 through November 2012), CINAHL, the Cochrane Library, grant databases, clinical trial registries, and reference lists.\n\n\nSTUDY SELECTION\nRandomized trials and observational studies on effects of using risk assessment on clinical outcomes and randomized trials of preventive interventions on clinical outcomes.\n\n\nDATA EXTRACTION\nMultiple investigators abstracted and checked study details and quality using predefined criteria.\n\n\nDATA SYNTHESIS\nOne good-quality trial found no evidence that use of a pressure ulcer risk assessment instrument, with or without a protocolized intervention strategy based on assessed risk, reduces risk for incident pressure ulcers compared with less standardized risk assessment based on nurses' clinical judgment. In higher-risk populations, 1 good-quality and 4 fair-quality randomized trials found that more advanced static support surfaces were associated with lower risk for pressure ulcers compared with standard mattresses (relative risk range, 0.20 to 0.60). Evidence on the effectiveness of low-air-loss and alternating-air mattresses was limited, with some trials showing no clear differences from advanced static support surfaces. Evidence on the effectiveness of nutritional supplementation, repositioning, and skin care interventions versus usual care was limited and had methodological shortcomings, precluding strong conclusions.\n\n\nLIMITATION\nOnly English-language articles were included, publication bias could not be formally assessed, and most studies had methodological shortcomings.\n\n\nCONCLUSION\nMore advanced static support surfaces are more effective than standard mattresses for preventing ulcers in higher-risk populations. The effectiveness of formal risk assessment instruments and associated intervention protocols compared with less standardized assessment methods and the effectiveness of other preventive interventions compared with usual care have not been clearly established.", "title": "" }, { "docid": "5a1f4efc96538c1355a2742f323b7a0e", "text": "A great challenge in the proteomics and structural genomics era is to predict protein structure and function, including identification of those proteins that are partially or wholly unstructured. Disordered regions in proteins often contain short linear peptide motifs (e.g., SH3 ligands and targeting signals) that are important for protein function. We present here DisEMBL, a computational tool for prediction of disordered/unstructured regions within a protein sequence. As no clear definition of disorder exists, we have developed parameters based on several alternative definitions and introduced a new one based on the concept of \"hot loops,\" i.e., coils with high temperature factors. Avoiding potentially disordered segments in protein expression constructs can increase expression, foldability, and stability of the expressed protein. 
DisEMBL is thus useful for target selection and the design of constructs as needed for many biochemical studies, particularly structural biology and structural genomics projects. The tool is freely available via a web interface (http://dis.embl.de) and can be downloaded for use in large-scale studies.", "title": "" } ]
scidocsrr
d13b9b82be0cc86e59f4579988430fc0
Pairs trading strategy optimization using the reinforcement learning method: a cointegration approach
[ { "docid": "f72f55da6ec2fdf9d0902648571fd9fc", "text": "Recently, numerous investigations for stock price prediction and portfolio management using machine learning have been trying to develop efficient mechanical trading systems. But these systems have a limitation in that they are mainly based on the supervised leaming which is not so adequate for leaming problems with long-term goals and delayed rewards. This paper proposes a method of applying reinforcement leaming, suitable for modeling and leaming various kinds of interactions in real situations, to the problem of stock price prediction. The stock price prediction problem is considered as Markov process which can be optimized by reinforcement learning based algorithm. TD(O), a reinforcement learning algorithm which leams only from experiences, is adopted and function approximation by artificial neural network is performed to leam the values of states each of which corresponds to a stock price trend at a given time. An experimental result based on the Korean stock market is presented to evaluate the performance of the proposed method.", "title": "" }, { "docid": "51f2ba8b460be1c9902fb265b2632232", "text": "Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.", "title": "" }, { "docid": "427796f5c37e41363c1664b47596eacf", "text": "A trading and portfolio management system called QSR is proposed. It uses Q-learning and Sharpe ratio maximization algorithm. We use absolute proot and relative risk-adjusted proot as performance function to train the system respectively, and employ a committee of two networks to do the testing. The new proposed algorithm makes use of the advantages of both parts and can be used in a more general case. We demonstrate with experimental results that the proposed approach generates appreciable proots from trading in the foreign exchange markets.", "title": "" } ]
[ { "docid": "30fda7dabb70dffbf297096671802c93", "text": "Much attention has recently been given to a printing method because they are easily designable, have a low cost, and can be mass produced. Numerous electronic devices are fabricated using printing methods because of these advantages. In paper mechatronics, attempts have been made to fabricate robots by printing on paper substrates. The robots are given structures through self-folding and functions using printed actuators. We developed a new system and device to fabricate more sophisticated printed robots. First, we successfully fabricated complex self-folding structures by applying an automatic cutting. Second, a rapidly created and low-voltage electrothermal actuator was developed using an inkjet printed circuit. Finally, a printed robot was fabricated by combining two techniques and two types of paper; a structure design paper and a circuit design paper. Gripper and conveyor robots were fabricated, and their functions were verified. These works demonstrate the possibility of paper mechatronics for rapid and low-cost prototyping as well as of printed robots.", "title": "" }, { "docid": "58c488555240ded980033111a9657be4", "text": "BACKGROUND\nThe management of opioid-induced constipation (OIC) is often complicated by the fact that clinical measures of constipation do not always correlate with patient perception. As the discomfort associated with OIC can lead to poor compliance with the opioid treatment, a shift in focus towards patient assessment is often advocated.\n\n\nSCOPE\nThe Bowel Function Index * (BFI) is a new patient-assessment scale that has been developed and validated specifically for OIC. It is a physician-administered, easy-to-use scale made up of three items (ease of defecation, feeling of incomplete bowel evacuation, and personal judgement of constipation). An extensive analysis has been performed in order to validate the BFI as reliable, stable, clinically valid, and responsive to change in patients with OIC, with a 12-point change in score constituting a clinically relevant change in constipation.\n\n\nFINDINGS\nThe results of the validation analysis were based on major clinical trials and have been further supported by data from a large open-label study and a pharmaco-epidemiological study, in which the BFI was used effectively to assess OIC in a large population of patients treated with opioids. Although other patient self-report scales exist, the BFI offers several unique advantages. First, by being physician-administered, the BFI minimizes reading and comprehension difficulties; second, by offering general and open-ended questions which capture patient perspective, the BFI is likely to detect most patients suffering from OIC; third, by being short and easy-to-use, it places little burden on the patient, thereby increasing the likelihood of gathering accurate information.\n\n\nCONCLUSION\nAltogether, the available data suggest that the BFI will be useful in clinical trials and in daily practice.", "title": "" }, { "docid": "31a2e6948a816a053d62e3748134cdc2", "text": "In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent’s representations during training or via use as part of an explicit planning mechanism. However, their application in practice has been limited to simplistic environments, due to the difficulty of training such models in larger, potentially partially-observed and 3D environments. 
In this work we introduce a novel action-conditioned generative model of such challenging environments. The model features a non-parametric spatial memory system in which we store learned, disentangled representations of the environment. Low-dimensional spatial updates are computed using a state-space model that makes use of knowledge on the prior dynamics of the moving agent, and high-dimensional visual observations are modelled with a Variational Auto-Encoder. The result is a scalable architecture capable of performing coherent predictions over hundreds of time steps across a range of partially observed 2D and 3D environments.", "title": "" }, { "docid": "ba7701a94880b59bbbd49fbfaca4b8c3", "text": "Many rural roads lack sharp, smoothly curving edges and a homogeneous surface appearance, hampering traditional vision-based road-following methods. However, they often have strong texture cues parallel to the road direction in the form of ruts and tracks left by other vehicles. This paper describes an unsupervised algorithm for following ill-structured roads in which dominant texture orientations computed with Gabor wavelet filters vote for a consensus road vanishing point location. The technique is first described for estimating the direction of straight-road segments, then extended to curved and undulating roads by tracking the vanishing point indicated by a differential “strip” of voters moving up toward the nominal vanishing line. Finally, the vanishing point is used to constrain a search for the road boundaries by maximizing textureand color-based region discriminant functions. Results are shown for a variety of road scenes including gravel roads, dirt trails, and highways.", "title": "" }, { "docid": "10d380b25a03c608c11fe5dde545f4b4", "text": "The increasing complexity and diversity of technical products plus the massive amount of product-related data overwhelms humans dealing with them at all stages of the life-cycle. We present a novel architecture for building smart products that are able to interact with humans in a natural and proactive way, and assist and guide them in performing their tasks. Further, we show how communication capabilities of smart products are used to account for the limited resources of individual products by leveraging resources provided by the environment or other smart products for storage and natural interaction.", "title": "" }, { "docid": "dffb89c39f11934567f98a31a0ef157c", "text": "We present a new method for semantic role labeling in which arguments and semantic roles are jointly embedded in a shared vector space for a given predicate. These embeddings belong to a neural network, whose output represents the potential functions of a graphical model designed for the SRL task. We consider both local and structured learning methods and obtain strong results on standard PropBank and FrameNet corpora with a straightforward product-of-experts model. We further show how the model can learn jointly from PropBank and FrameNet annotations to obtain additional improvements on the smaller FrameNet dataset.", "title": "" }, { "docid": "97ba22fa685384e9dfd0402798fe7019", "text": "We consider the problems of i) using public-key encryption to enforce dynamic access control on clouds; and ii) key rotation of data stored on clouds. 
Historically, proxy re-encryption, ciphertext delegation, and related technologies have been advocated as tools that allow for revocation and the ability to cryptographically enforce dynamic access control on the cloud, and more recently they have suggested for key rotation of data stored on clouds. Current literature frequently assumes that data is encrypted directly with public-key encryption primitives. However, for efficiency reasons systems would need to deploy with hybrid encryption. Unfortunately, we show that if hybrid encryption is used, then schemes are susceptible to a key-scraping attack. Given a proxy re-encryption or delegation primitive, we show how to construct a new hybrid scheme that is resistant to this attack and highly efficient. The scheme only requires the modification of a small fraction of the bits of the original ciphertext. The number of modifications scales linearly with the security parameter and logarithmically with the file length: it does not require the entire symmetric-key ciphertext to be re-encrypted! Beyond the construction, we introduce new security definitions for the problem at hand, prove our construction secure, discuss use cases, and provide quantitative data showing its practical benefits and efficiency. We show the construction extends to identity-based proxy re-encryption and revocable-storage attribute-based encryption, and thus that the construction is robust, supporting most primitives of interest.", "title": "" }, { "docid": "22ab8eb2b8eaafb2ee72ea0ed7148ca4", "text": "As travel is taking more significant part in our life, route recommendation service becomes a big business and attracts many major players in IT industry. Given a pair of user-specified origin and destination, a route recommendation service aims to provide users with the routes of best travelling experience according to criteria, such as travelling distance, travelling time, traffic condition, etc. However, previous research shows that even the routes recommended by the big-thumb service providers can deviate significantly from the routes travelled by experienced drivers. It means travellers' preferences on route selection are influenced by many latent and dynamic factors that are hard to model exactly with pre-defined formulas. In this work we approach this challenging problem with a very different perspective- leveraging crowds' knowledge to improve the recommendation quality. In this light, CrowdPlanner - a novel crowd-based route recommendation system has been developed, which requests human workers to evaluate candidate routes recommended by different sources and methods, and determine the best route based on their feedbacks. In this paper, we particularly focus on two important issues that affect system performance significantly: (1) how to efficiently generate tasks which are simple to answer but possess sufficient information to derive user-preferred routes; and (2) how to quickly identify a set of appropriate domain experts to answer the questions timely and accurately. Specifically, the task generation component in our system generates a series of informative and concise questions with optimized ordering for a given candidate route set so that workers feel comfortable and easy to answer. In addition, the worker selection component utilizes a set of selection criteria and an efficient algorithm to find the most eligible workers to answer the questions with high accuracy. 
A prototype system has been deployed to many voluntary mobile clients and extensive tests on real-scenario queries have shown the superiority of CrowdPlanner in comparison with the results given by map services and popular route mining algorithms.", "title": "" }, { "docid": "8fa135e5d01ba2480dea4621ceb1e9f4", "text": "With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art in Grid workflow systems, but also identifies the areas that need further research.", "title": "" }, { "docid": "493893b0eb606477b3d0a5b10ddf9ade", "text": "While new therapies for chronic hepatitis C virus infection have delivered remarkable cure rates, curative therapies for chronic hepatitis B virus (HBV) infection remain a distant goal. Although current direct antiviral therapies are very efficient in controlling viral replication and limiting the progression to cirrhosis, these treatments require lifelong administration due to the frequent viral rebound upon treatment cessation, and immune modulation with interferon is only effective in a subgroup of patients. Specific immunotherapies can offer the possibility of eliminating or at least stably maintaining low levels of HBV replication under the control of a functional host antiviral response. Here, we review the development of immune cell therapy for HBV, highlighting the potential antiviral efficiency and potential toxicities in different groups of chronically infected HBV patients. We also discuss the chronic hepatitis B patient populations that best benefit from therapeutic immune interventions.", "title": "" }, { "docid": "11ecb3df219152d33020ba1c4f8848bb", "text": "Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring for every new need a new protocol built from scratch. This led to an unwieldy ossified Internet architecture resistant to any attempts at formal verification and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean slate Internet design-in particular, the software defined networking (SDN) paradigm-offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence in interest of applying formal methods to specification, verification, and synthesis of networking protocols and applications. 
In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.", "title": "" }, { "docid": "15ad5044900511277e0cd602b0c07c5e", "text": "Intentional facial expression of emotion is critical to healthy social interactions. Patients with neurodegenerative disease, particularly those with right temporal or prefrontal atrophy, show dramatic socioemotional impairment. This was an exploratory study examining the neural and behavioral correlates of intentional facial expression of emotion in neurodegenerative disease patients and healthy controls. One hundred and thirty three participants (45 Alzheimer's disease, 16 behavioral variant frontotemporal dementia, 8 non-fluent primary progressive aphasia, 10 progressive supranuclear palsy, 11 right-temporal frontotemporal dementia, 9 semantic variant primary progressive aphasia patients and 34 healthy controls) were video recorded while imitating static images of emotional faces and producing emotional expressions based on verbal command; the accuracy of their expression was rated by blinded raters. Participants also underwent face-to-face socioemotional testing and informants described participants' typical socioemotional behavior. Patients' performance on emotion expression tasks was correlated with gray matter volume using voxel-based morphometry (VBM) across the entire sample. We found that intentional emotional imitation scores were related to fundamental socioemotional deficits; patients with known socioemotional deficits performed worse than controls on intentional emotion imitation; and intentional emotional expression predicted caregiver ratings of empathy and interpersonal warmth. Whole brain VBMs revealed a rightward cortical atrophy pattern homologous to the left lateralized speech production network was associated with intentional emotional imitation deficits. Results point to a possible neural mechanisms underlying complex socioemotional communication deficits in neurodegenerative disease patients.", "title": "" }, { "docid": "eedcff8c2a499e644d1343b353b2a1b9", "text": "We consider the problem of finding related tables in a large corpus of heterogenous tables. Detecting related tables provides users a powerful tool for enhancing their tables with additional data and enables effective reuse of available public data. Our first contribution is a framework that captures several types of relatedness, including tables that are candidates for joins and tables that are candidates for union. Our second contribution is a set of algorithms for detecting related tables that can be either unioned or joined. We describe a set of experiments that demonstrate that our algorithms produce highly related tables. We also show that we can often improve the results of table search by pulling up tables that are ranked much lower based on their relatedness to top-ranked tables. Finally, we describe how to scale up our algorithms and show the results of running it on a corpus of over a million tables extracted from Wikipedia.", "title": "" }, { "docid": "382ac4d3ba3024d0c760cff1eef505c3", "text": "We seek to close the gap between software engineering (SE) and human-computer interaction (HCI) by indicating interdisciplinary interfaces throughout the different phases of SE and HCI lifecycles. As agile representatives of SE, Extreme Programming (XP) and Agile Modeling (AM) contribute helpful principles and practices for a common engineering approach. 
We present a cross-discipline user interface design lifecycle that integrates SE and HCI under the umbrella of agile development. Melting IT budgets, pressure of time and the demand to build better software in less time must be supported by traveling as light as possible. We did, therefore, choose not just to mediate both disciplines. Following our surveys, a rather radical approach best fits the demands of engineering organizations.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "5eb9e759ec8fc9ad63024130f753d136", "text": "A 3-10 GHz broadband CMOS T/R switch for ultra-wideband (UWB) transceiver is presented. The broadband CMOS T/R switch is fabricated based on the 0.18 mu 1P6M standard CMOS process. On-chip measurement of the CMOS T/R switch is performed. The insertion loss of the proposed CMOS T/R Switch is about 3.1plusmn1.3dB. The return losses at both input and output terminals are higher than 14 dB. It is also characterized with 25-34dB isolation and 18-20 dBm input P1dB. The broadband CMOS T/R switch shows highly linear phase and group delay of 20plusmn10 ps from 10MHz to 15GHz. It can be easily integrated with other CMOS RFICs to form on-chip transceivers for various UWB applications", "title": "" }, { "docid": "71a4399f8ccbeee4dced4d2eba3cf9ff", "text": "Generating text from structured data is important for various tasks such as question answering and dialog systems. We show that in at least one domain, without any supervision and only based on unlabeled text, we are able to build a Natural Language Generation (NLG) system with higher performance than supervised approaches. In our approach, we interpret the structured data as a corrupt representation of the desired output and use a denoising auto-encoder to reconstruct the sentence. We show how to introduce noise into training examples that do not contain structured data, and that the resulting denoising auto-encoder generalizes to generate correct sentences when given structured data.", "title": "" }, { "docid": "081b15c3dda7da72487f5a6e96e98862", "text": "The CEDAR real-time address block location system, which determines candidates for the location of the destination address from a scanned mail piece image, is described. For each candidate destination address block (DAB), the address block location (ABL) system determines the line segmentation, global orientation, block skew, an indication of whether the address appears to be handwritten or machine printed, and a value indicating the degree of confidence that the block actually contains the destination address. 
With 20-MHz Sparc processors, the average time per mail piece for the combined hardware and software system components is 0.210 seconds. The system located 89.0% of the addresses as the top choice. Recent developments in the system include the use of a top-down segmentation tool, address syntax analysis using only connected component data, and improvements to the segmentation refinement routines. This has increased top choice performance to 91.4%.<<ETX>>", "title": "" }, { "docid": "da36aa77b26e5966bdb271da19bcace3", "text": "We present Brian, a new clock driven simulator for spiking neural networks which is available on almost all platforms. Brian is easy to learn and use, highly flexible and easily extensible. The Brian package itself and simulations using it are all written in the Python programming language, which is very well adapted to these goals. Python is an easy, concise and highly developed language with many advanced features and development tools, excellent documentation and a large community of users providing support and extension packages. Brian allows you to write very concise, natural and readable code for simulations, and makes it quick and efficient to play with these models (for example, changing the differential equations doesn't require a recompile of the code). Figure 1 shows an example of a complete network implemented in Brian, a randomly connected network of integrate and fire neurons with exponential inhibitory and excitatory currents (the CUBA network from [1]). Defining the model, running from Seventeenth Annual Computational Neuroscience Meeting: CNS*2008 Portland, OR, USA. 19–24 July 2008", "title": "" }, { "docid": "10a6bccb77b6b94149c54c9e343ceb6c", "text": "Clone detectors find similar code fragments (i.e., instances of code clones) and report large numbers of them for industrial systems. To maintain or manage code clones, developers often have to investigate differences of multiple cloned code fragments. However,existing program differencing techniques compare only two code fragments at a time. Developers then have to manually combine several pairwise differencing results. In this paper, we present an approach to automatically detecting differences across multiple clone instances. We have implemented our approach as an Eclipse plugin and evaluated its accuracy with three Java software systems. Our evaluation shows that our algorithm has precision over 97.66% and recall over 95.63% in three open source Java projects. We also conducted a user study of 18 developers to evaluate the usefulness of our approach for eight clone-related refactoring tasks. Our study shows that our approach can significantly improve developers’performance in refactoring decisions, refactoring details, and task completion time on clone-related refactoring tasks. Automatically detecting differences across multiple clone instances also opens opportunities for building practical applications of code clones in software maintenance, such as auto-generation of application skeleton, intelligent simultaneous code editing.", "title": "" } ]
scidocsrr
0707164d1a28b85444377ab859a6b9d5
FPGA implementation of an advanced encoding and decoding architecture of polar codes
[ { "docid": "fb63ab21fa40b125c1a85b9c3ed1dd8d", "text": "The two central topics of information theory are the compression and the transmission of data. Shannon, in his seminal work, formalized both these problems and determined their fundamental limits. Since then the main goal of coding theory has been to find practical schemes that approach these limits. Polar codes, recently invented by Arıkan, are the first “practical” codes that are known to achieve the capacity for a large class of channels. Their code construction is based on a phenomenon called “channel polarization”. The encoding as well as the decoding operation of polar codes can be implemented with O(N log N) complexity, where N is the blocklength of the code. We show that polar codes are suitable not only for channel coding but also achieve optimal performance for several other important problems in information theory. The first problem we consider is lossy source compression. We construct polar codes that asymptotically approach Shannon’s rate-distortion bound for a large class of sources. We achieve this performance by designing polar codes according to the “test channel”, which naturally appears in Shannon’s formulation of the rate-distortion function. The encoding operation combines the successive cancellation algorithm of Arıkan with a crucial new ingredient called “randomized rounding”. As for channel coding, both the encoding as well as the decoding operation can be implemented with O(N log N) complexity. This is the first known “practical” scheme that approaches the optimal rate-distortion trade-off. We also construct polar codes that achieve the optimal performance for the Wyner-Ziv and the Gelfand-Pinsker problems. Both these problems can be tackled using “nested” codes and polar codes are naturally suited for this purpose. We further show that polar codes achieve the capacity of asymmetric channels, multi-terminal scenarios like multiple access channels, and degraded broadcast channels. For each of these problems, our constructions are the first known “practical” schemes that approach the optimal performance. The original polar codes of Arıkan achieve a block error probability decaying exponentially in the square root of the block length. For source coding, the gap between the achieved distortion and the limiting distortion also vanishes exponentially in the square root of the blocklength. We explore other polarlike code constructions with better rates of decay. With this generalization,", "title": "" } ]
[ { "docid": "8fccceb2757decb670eed84f4b2405a1", "text": "This paper develops and evaluates search and optimization techniques for autotuning 3D stencil (nearest neighbor) computations on GPUs. Observations indicate that parameter tuning is necessary for heterogeneous GPUs to achieve optimal performance with respect to a search space. Our proposed framework takes a most concise specification of stencil behavior from the user as a single formula, autogenerates tunable code from it, systematically searches for the best configuration and generates the code with optimal parameter configurations for different GPUs. This autotuning approach guarantees adaptive performance for different generations of GPUs while greatly enhancing programmer productivity. Experimental results show that the delivered floating point performance is very close to previous handcrafted work and outperforms other autotuned stencil codes by a large margin. Furthermore, heterogeneous GPU clusters are shown to exhibit the highest performance for dissimilar tuning parameters leveraging proportional partitioning relative to single-GPU performance.", "title": "" }, { "docid": "55631b81d46fc3dcaad8375176cb1c68", "text": "UNLABELLED\nThe need for long-term retention to prevent post-treatment tooth movement is now widely accepted by orthodontists. This may be achieved with removable retainers or permanent bonded retainers. This article aims to provide simple guidance for the dentist on how to maintain and repair both removable and fixed retainers.\n\n\nCLINICAL RELEVANCE\nThe general dental practitioner is more likely to review patients over time and needs to be aware of the need for long-term retention and how to maintain and repair the retainers.", "title": "" }, { "docid": "73cfe07d02651eee42773824d03dcfa1", "text": "Discovery of usage patterns from Web data is one of the primary purposes for Web Usage Mining. In this paper, a technique to generate Significant Usage Patterns (SUP) is proposed and used to acquire significant “user preferred navigational trails”. The technique uses pipelined processing phases including sub-abstraction of sessionized Web clickstreams, clustering of the abstracted Web sessions, concept-based abstraction of the clustered sessions, and SUP generation. Using this technique, valuable customer behavior information can be extracted by Web site practitioners. Experiments conducted using Web log data provided by J.C.Penney demonstrate that SUPs of different types of customers are distinguishable and interpretable. This technique is particularly suited for analysis of dynamic websites.", "title": "" }, { "docid": "f0db74061a2befca317f9333a0712ab9", "text": "This paper tries to give a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we start reviewing the fundamental basics of the perceptron and neural networks, along with some fundamental theory that is often omitted. Doing so allows us to understand the reasons for the rise of deep learning in many application domains. Obviously medical image processing is one of these areas which has been largely affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. 
There are also recent trends in physical simulation, modeling, and reconstruction that have led to astonishing results. Yet, some of these approaches neglect prior knowledge and hence bear the risk of producing implausible results. These apparent weaknesses highlight current limitations of deep learning. However, we also briefly discuss promising approaches that might be able to resolve these problems in the future.", "title": "" }, { "docid": "db4bb32f6fdc7a05da41e223afac3025", "text": "Modern imaging techniques for probing brain function, including functional magnetic resonance imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for analysis and visualization of such imaging data to separate the signal from the noise and characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging, and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: \"noise\" characterization and suppression, and \"signal\" characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for nonstationarity in the data. Of particular note are 1) the development of a decomposition technique (space-frequency singular value decomposition) that is shown to be a useful means of characterizing the image data, and 2) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources.", "title": "" }, { "docid": "ee865e3291eff95b5977b54c22b59f19", "text": "Fuzzing is a process where random, almost valid, input streams are automatically generated and fed into computer systems in order to test the robustness of user-exposed interfaces. We fuzz the Linux kernel system call interface; unlike previous work that attempts to generically fuzz all of an operating system’s system calls, we explore the effectiveness of using specific domain knowledge and focus on finding bugs and security issues related to a single Linux system call. The perf_event_open() system call was introduced in 2009 and has grown to be a complex interface with over 40 arguments that interact in subtle ways. By using detailed knowledge of typical perf_event usage patterns we develop a custom tool, perf_fuzzer, that has found bugs that more generic, system-wide, fuzzers have missed. Numerous crashing bugs have been found, including a local root exploit. Fixes for these bugs have been merged into the main Linux source tree. Testing continues to find new bugs, although they are increasingly hard to isolate, requiring development of new isolation techniques and helper utilities. We describe the development of perf_fuzzer, examine the bugs found, and discuss ways that this work can be extended to find more bugs and cover other system calls.", "title": "" }, { "docid": "980565c38859db2df10db238d8a4dc61", "text": "Performing High Voltage (HV) tasks with a multi-craft workforce creates a special set of safety circumstances.
This paper aims to present vital information relating to when it is acceptable to use a single or a two-layer soil structure. Also it discusses the implication of the high voltage infrastructure on the earth grid and the safety of this implication under a single or a two-layer soil structure. A multiple case study is investigated to show the importance of using the right soil resistivity structure during the earthing system design. Keywords—Earth Grid, EPR, High Voltage, Soil Resistivity Structure, Step Voltage, Touch Voltage.", "title": "" }, { "docid": "d06d09c38988dffce44068986f912c6d", "text": "Depression, the most prevalent mental illness, is underdiagnosed and undertreated, highlighting the need to extend the scope of current screening methods. Here, we use language from Facebook posts of consenting individuals to predict depression recorded in electronic medical records. We accessed the history of Facebook statuses posted by 683 patients visiting a large urban academic emergency department, 114 of whom had a diagnosis of depression in their medical records. Using only the language preceding their first documentation of a diagnosis of depression, we could identify depressed patients with fair accuracy [area under the curve (AUC) = 0.69], approximately matching the accuracy of screening surveys benchmarked against medical records. Restricting Facebook data to only the 6 months immediately preceding the first documented diagnosis of depression yielded a higher prediction accuracy (AUC = 0.72) for those users who had sufficient Facebook data. Significant prediction of future depression status was possible as far as 3 months before its first documentation. We found that language predictors of depression include emotional (sadness), interpersonal (loneliness, hostility), and cognitive (preoccupation with the self, rumination) processes. Unobtrusive depression assessment through social media of consenting individuals may become feasible as a scalable complement to existing screening and monitoring procedures.", "title": "" }, { "docid": "54b43b5e3545710dfe37f55b93084e34", "text": "Cloud computing is a model for delivering information technology services, wherein resources are retrieved from the Internet through web-based tools and applications instead of a direct connection to a server. The capability to provision and release cloud computing resources with minimal management effort or service provider interaction led to the rapid increase of the use of cloud computing. Therefore, balancing cloud computing resources to provide better performance and services to end users is important. Load balancing in cloud computing means balancing three important stages through which a request is processed. The three stages are data center selection, virtual machine scheduling, and task scheduling at a selected data center. User task scheduling plays a significant role in improving the performance of cloud services. This paper presents a review of various energy-efficient task scheduling methods in a cloud environment. A brief analysis of various scheduling parameters considered in these methods is also presented. The results show that the best power-saving percentage level can be achieved by using both DVFS and DNS.", "title": "" }, { "docid": "ed414502134a7423af6b54f17db72e8e", "text": "Chatbots have been used in different scenarios for getting people interested in CS for decades. However, their potential for teaching basic concepts and their engaging effect has not been measured. 
In this paper we present a software platform called Chatbot designed to foster engagement while teaching basic CS concepts such as variables, conditionals and finite state automata, among others. We carried out two experiences using Chatbot and the well known platform Alice: 1) an online nation-wide competition, and 2) an in-class 15-lesson pilot course in 2 high schools. Data shows that retention and girl interest are higher with Chatbot than with Alice, indicating student engagement.", "title": "" }, { "docid": "f39e5ef91fa130144ac245344ea20a91", "text": "The development of automatic visual control system is a very important research topic in computer vision. This face identification system must be robust to the various quality of the images such as light, face expression, glasses, beards, moustaches etc. We propose using the wavelet transformation algorithms for reduction the source data space. We have realized the method of the expansion of the values of pixels to the whole intensity range and the algorithm of the equalization of histogram to adjust image intensity values. The support vector machines (SVM) technology has been used for the face recognition in our work.", "title": "" }, { "docid": "340dd41b4236285433403da3eb99ee08", "text": "Gut microbiota is an assortment of microorganisms inhabiting the length and width of the mammalian gastrointestinal tract. The composition of this microbial community is host specific, evolving throughout an individual's lifetime and susceptible to both exogenous and endogenous modifications. Recent renewed interest in the structure and function of this \"organ\" has illuminated its central position in health and disease. The microbiota is intimately involved in numerous aspects of normal host physiology, from nutritional status to behavior and stress response. Additionally, they can be a central or a contributing cause of many diseases, affecting both near and far organ systems. The overall balance in the composition of the gut microbial community, as well as the presence or absence of key species capable of effecting specific responses, is important in ensuring homeostasis or lack thereof at the intestinal mucosa and beyond. The mechanisms through which microbiota exerts its beneficial or detrimental influences remain largely undefined, but include elaboration of signaling molecules and recognition of bacterial epitopes by both intestinal epithelial and mucosal immune cells. The advances in modeling and analysis of gut microbiota will further our knowledge of their role in health and disease, allowing customization of existing and future therapeutic and prophylactic modalities.", "title": "" }, { "docid": "530248eb40b4bf26824713b537d8e197", "text": "A novel comb-structure-based, capacitive MEMS microphone concept is proposed, which is expected to significantly reduce viscous damping losses, challenging the high performance of conventional MEMS microphones. To this end, we derived a dedicated, fully energy-coupled and properly calibrated system-level model scaling with all relevant design parameters. It enables to discriminate in detail the impact of the individual components like transducer, package and electrostatic read out to the overall signal-to-noise-ratio (SNR) of the microphone and hence, to identify the optimal design of the device. Measurements of first prototypes show promising results and agree very well with simulations demonstrating the predictive power of the model w.r.t. 
further optimization.", "title": "" }, { "docid": "fe38de8c129845b86ee0ec4acf865c14", "text": "McDonald’s develop product lines. But software product lines are a relatively new concept. They are rapidly emerging as a practical and important software development paradigm. A product line succeeds because companies can exploit their software products’ commonalities to achieve economies of production. The Software Engineering Institute’s (SEI) work has confirmed the benefits of pursuing this approach; it also found that doing so is both a technical and business decision. To succeed with software product lines, an organization must alter its technical practices, management practices, organizational structure and personnel, and business approach.", "title": "" }, { "docid": "36a5de24f61c4113ba96adcfb5fe192d", "text": "This paper presents a control method for a quadrotor equipped with a multi-DOF manipulator to transport a common object to the desired position. By considering a quadrotor and robot arm as a combined system, called the quadrotor-manipulator system, the kinematic and dynamic models are built together in a general version using Euler-Lagrange (EL) equations. The impact on the quadrotor-manipulator system caused by the object is also considered. The transportation task can be decomposed into five steps. The planning trajectory can be obtained when the initial and the final position of the object is given. With the combined dynamic model, we propose a control scheme consisting of position controller, attitude controller and manipulator controller to track the planning trajectory. To validate our approach, the simulation results of a transportation task with a quadrotor with a 2-DOF manipulator are presented.", "title": "" }, { "docid": "feeb5741fae619a37f44eae46169e9d1", "text": "A 24-GHz novel active quasi-circulator is developed in TSMC 0.18-µm CMOS. We propose a new architecture that uses a canceling mechanism to achieve high isolation and reduce the circuit area. The measured insertion losses |S32| and |S21| are 9 and 8.5 dB, respectively. The isolation |S31| is greater than 30 dB. The dc power consumption is only 9.12 mW with a chip size of 0.35 mm².", "title": "" }, { "docid": "9131f56c00023a3402b602940be621bb", "text": "Location estimation of a wireless capsule endoscope at the 400 MHz MICS band is implemented here using both RSSI and TOA-based techniques, and their performance is investigated. To improve the RSSI-based location estimation, a maximum likelihood (ML) estimation method is employed. For the TOA-based localization, FDTD coupled with continuous wavelet transform (CWT) is used to estimate the time of arrival and localization is performed using multilateration. The performances of the proposed localization algorithms are evaluated using a computational heterogeneous biological tissue phantom in the 402MHz-405MHz MICS band. Our investigations reveal that the accuracy obtained by the TOA-based method is superior to RSSI-based estimates. It has been observed that the ML method substantially improves the accuracy of the RSSI-based location estimation.", "title": "" }, { "docid": "90c99c40bfecf75534be0c09d955a207", "text": "Massive Open Online Courses (MOOCs) have been playing a pivotal role among the latest e-learning initiatives and have obtained widespread popularity in many universities.
But the low course completion rate and the high midway dropout rate of students have puzzled some researchers and designers of MOOCs. Therefore, it is important to explore the factors affecting students’ continuance intention to use MOOCs. This study integrates task-technology fit, which can explain how the characteristics of task and technology affect the outcome of technology utilization, into the expectation-confirmation model to analyze the factors influencing students’ continued use of MOOCs and the relationships of constructs in the model; it will also extend our understanding of continuance intention about MOOCs. We analyze and study 234 respondents, and results reveal that perceived usefulness, satisfaction and task-technology fit are important antecedents of the intention to continue using MOOCs. Researchers and designers of MOOCs may obtain further insight into continuance intention about MOOCs.", "title": "" }, { "docid": "ea9bafe86af4418fa51abe27a2c2180b", "text": "In this work, we propose a novel phenomenological model of the EEG signal based on the dynamics of a coupled Duffing-van der Pol oscillator network. An optimization scheme is adopted to match data generated from the model with clinically obtained EEG data from subjects under resting eyes-open (EO) and eyes-closed (EC) conditions. It is shown that a coupled system of two Duffing-van der Pol oscillators with optimized parameters yields signals with characteristics that match those of the EEG in both the EO and EC cases. The results, which are reinforced using statistical analysis, show that the EEG recordings under EC and EO resting conditions are clearly distinct realizations of the same underlying model occurring due to parameter variations with qualitatively different nonlinear dynamic characteristics. In addition, the interplay between noise and nonlinearity is addressed and it is shown that, for appropriately chosen values of noise intensity in the model, very good agreement exists between the model output and the EEG in terms of the power spectrum as well as Shannon entropy. In summary, the results establish that an appropriately tuned stochastic coupled nonlinear oscillator network such as the Duffing-van der Pol system could provide a useful framework for modeling and analysis of the EEG signal. In turn, design of algorithms based on the framework has the potential to positively impact the development of novel diagnostic strategies for brain injuries and disorders. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cf14aef4996383bf29140bed9754150c", "text": "Author’s Details: (1) Titik Aryati (2) Eny Purwaningsih (1) (2) Faculty of Economics and Business Trisakti University titikar@yahoo.com Abstract: The purpose of this research is to determine the influence of diversification and corporate social responsibility on earnings management with audit committee effectiveness as the moderating variable. Discretionary accruals are used as a proxy for earnings management. Data for this research were obtained from secondary sources. The sample comprises 299 manufacturing companies listed on the IDX over 2011-2015. This research uses the multiple regression technique as the method of analysis. The results show that business diversification has a positive influence on earnings management. Meanwhile, geographic diversification and corporate social responsibility do not have any influence on earnings management.
The results on the moderating variable show that the expertise of the audit committee can moderate the relationship between business diversification and earnings management.", "title": "" } ]
scidocsrr
f747d5351707e12c29021a9b41ca5792
Effectiveness of virtual reality-based pain control with multiple treatments.
[ { "docid": "bf6d56c2fd716802b8e2d023f86a4225", "text": "This is the first case report to demonstrate the efficacy of immersive computer-generated virtual reality (VR) and mixed reality (touching real objects which patients also saw in VR) for the treatment of spider phobia. The subject was a 37-yr-old female with severe and incapacitating fear of spiders. Twelve weekly 1-hr sessions were conducted over a 3-month period. Outcome was assessed on measures of anxiety, avoidance, and changes in behavior toward real spiders. VR graded exposure therapy was successful for reducing fear of spiders providing converging evidence for a growing literature showing the effectiveness of VR as a new medium for exposure therapy.", "title": "" } ]
[ { "docid": "750846bc27dc013bd0d392959caf3ecc", "text": "Analysis of the WinZip en ryption method Tadayoshi Kohno May 8, 2004 Abstra t WinZip is a popular ompression utility for Mi rosoft Windows omputers, the latest version of whi h is advertised as having \\easy-to-use AES en ryption to prote t your sensitive data.\" We exhibit several atta ks against WinZip's new en ryption method, dubbed \\AE-2\" or \\Advan ed En ryption, version two.\" We then dis uss se ure alternatives. Sin e at a high level the underlying WinZip en ryption method appears se ure (the ore is exa tly En ryptthen-Authenti ate using AES-CTR and HMAC-SHA1), and sin e one of our atta ks was made possible be ause of the way that WinZip Computing, In . de ided to x a di erent se urity problem with its previous en ryption method AE-1, our atta ks further unders ore the subtlety of designing ryptographi ally se ure software.", "title": "" }, { "docid": "96d8971bf4a8d18f4471019796348e1b", "text": "Most wired active electrodes reported so far have a gain of one and require at least three wires. This leads to stiff cables, large connectors and additional noise for the amplifier. The theoretical advantages of amplifying the signal on the electrodes right from the source has often been described, however, rarely implemented. This is because a difference in the gain of the electrodes due to component tolerances strongly limits the achievable common mode rejection ratio (CMRR). In this paper, we introduce an amplifier for bioelectric events where the major part of the amplification (40 dB) is achieved on the electrodes to minimize pick-up noise. The electrodes require only two wires of which one can be used for shielding, thus enabling smaller connecters and smoother cables. Saturation of the electrodes is prevented by a dc-offset cancelation scheme with an active range of /spl plusmn/250 mV. This error feedback simultaneously allows to measure the low frequency components down to dc. This enables the measurement of slow varying signals, e.g., the change of alertness or the depolarization before an epileptic seizure normally not visible in a standard electroencephalogram (EEG). The amplifier stage provides the necessary supply current for the electrodes and generates the error signal for the feedback loop. The amplifier generates a pseudodifferential signal where the amplified bioelectric event is present on one lead, but the common mode signal is present on both leads. Based on the pseudodifferential signal we were able to develop a new method to compensate for a difference in the gain of the active electrodes which is purely software based. The amplifier system is then characterized and the input referred noise as well as the CMRR are measured. For the prototype circuit the CMRR evaluated to 78 dB (without the driven-right-leg circuit). The applicability of the system is further demonstrated by the recording of an ECG.", "title": "" }, { "docid": "2006a3fd87a3d7228b2a25061f7eb06b", "text": "Thailand suffers from frequent flooding during the monsoon season and droughts in summer. In some places, severe cases of both may even occur. Managing water resources effectively requires a good information system for decision-making. There is currently a lack in knowledge sharing between organizations and researchers responsible. These are the experts in monitoring and controlling the water supply and its conditions. 
The knowledge owned by these experts are not captured, classified and integrated into an information system for decisionmaking. Ontologies are formal knowledge representation models. Knowledge management and artificial intelligence technology is a basic requirement for developing ontology-based semantic search on the Web. In this paper, we present ontology modeling approach that is based on the experiences of the researchers. The ontology for drought management consists of River Basin Ontology, Statistics Ontology and Task Ontology to facilitate semantic match during search. The hybrid ontology architecture can also be used for drought management", "title": "" }, { "docid": "2a987f50527c4b4501ae29493f703e32", "text": "The emergence of novel techniques for automatic anomaly detection in surveillance videos has significantly reduced the burden of manual processing of large, continuous video streams. However, existing anomaly detection systems suffer from a high false-positive rate and also, are not real-time, which makes them practically redundant. Furthermore, their predefined feature selection techniques limit their application to specific cases. To overcome these shortcomings, a dynamic anomaly detection and localization system is proposed, which uses deep learning to automatically learn relevant features. In this technique, each video is represented as a group of cubic patches for identifying local and global anomalies. A unique sparse denoising autoencoder architecture is used, that significantly reduced the computation time and the number of false positives in frame-level anomaly detection by more than 2.5%. Experimental analysis on two benchmark data sets - UMN dataset and UCSD Pedestrian dataset, show that our algorithm outperforms the state-of-the-art models in terms of false positive rate, while also showing a significant reduction in computation time.", "title": "" }, { "docid": "199d2f3d640fbb976ef27c8d129922ef", "text": "Federated learning enables resource-constrained edge compute devices, such as mobile phones and IoT devices, to learn a shared model for prediction, while keeping the training data local. This decentralized approach to train models provides privacy, security, regulatory and economic benefits. In this work, we focus on the statistical challenge of federated learning when local data is non-IID. We first show that the accuracy of federated learning reduces significantly, by up to ~55% for neural networks trained for highly skewed non-IID data, where each client device trains only on a single class of data. We further show that this accuracy reduction can be explained by the weight divergence, which can be quantified by the earth mover’s distance (EMD) between the distribution over classes on each device and the population distribution. As a solution, we propose a strategy to improve training on non-IID data by creating a small subset of data which is globally shared between all the edge devices. Experiments show that accuracy can be increased by ~30% for the CIFAR-10 dataset with only 5% globally shared data.", "title": "" }, { "docid": "651d048aaae1ce1608d3d9f0f09d4b9b", "text": "We investigate here the behavior of the standard k-means clustering algorithm and several alternatives to it: the k-harmonic means algorithm due to Zhang and colleagues, fuzzy k-means, Gaussian expectation-maximization, and two new variants of k-harmonic means. 
Our aim is to find which aspects of these algorithms contribute to finding good clusterings, as opposed to converging to a low-quality local optimum. We describe each algorithm in a unified framework that introduces separate cluster membership and data weight functions. We then show that the algorithms do behave very differently from each other on simple low-dimensional synthetic datasets and image segmentation tasks, and that the k-harmonic means method is superior. Having a soft membership function is essential for finding high-quality clusterings, but having a non-constant data weight function is useful also.", "title": "" }, { "docid": "90738b84c4db0a267c7213c923368e6a", "text": "Detecting overlapping communities is essential to analyzing and exploring natural networks such as social networks, biological networks, and citation networks. However, most existing approaches do not scale to the size of networks that we regularly observe in the real world. In this paper, we develop a scalable approach to community detection that discovers overlapping communities in massive real-world networks. Our approach is based on a Bayesian model of networks that allows nodes to participate in multiple communities, and a corresponding algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. We demonstrate how we can discover the hidden community structure of several real-world networks, including 3.7 million US patents, 575,000 physics articles from the arXiv preprint server, and 875,000 connected Web pages from the Internet. Furthermore, we demonstrate on large simulated networks that our algorithm accurately discovers the true community structure. This paper opens the door to using sophisticated statistical models to analyze massive networks.", "title": "" }, { "docid": "d568194d6b856243056c072c96c76115", "text": "OBJECTIVE\nTo develop an evidence-based guideline to help clinicians make decisions about when and how to safely taper and stop antipsychotics; to focus on the highest level of evidence available and seek input from primary care professionals in the guideline development, review, and endorsement processes.\n\n\nMETHODS\nThe overall team comprised 9 clinicians (1 family physician, 1 family physician specializing in long-term care, 1 geriatric psychiatrist, 2 geriatricians, 4 pharmacists) and a methodologist; members disclosed conflicts of interest. For guideline development, a systematic process was used, including the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach. Evidence was generated from a Cochrane systematic review of antipsychotic deprescribing trials for the behavioural and psychological symptoms of dementia, and a systematic review was conducted to assess the evidence behind the benefits of using antipsychotics for insomnia. A review of reviews of the harms of continued antipsychotic use was performed, as well as narrative syntheses of patient preferences and resource implications. This evidence and GRADE quality-of-evidence ratings were used to generate recommendations. The team refined guideline content and recommendation wording through consensus and synthesized clinical considerations to address common front-line clinician questions. 
The draft guideline was distributed to clinicians and stakeholders for review and revisions were made at each stage.\n\n\nRECOMMENDATIONS\nWe recommend deprescribing antipsychotics for adults with behavioural and psychological symptoms of dementia treated for at least 3 months (symptoms stabilized or no response to an adequate trial) and for adults with primary insomnia treated for any duration or secondary insomnia in which underlying comorbidities are managed. A decision-support algorithm was developed to accompany the guideline.\n\n\nCONCLUSION\nAntipsychotics are associated with harms and can be safely tapered. Patients and caregivers might be more amenable to deprescribing if they understand the rationale (potential for harm), are involved in developing the tapering plan, and are offered behavioural advice or management. This guideline provides recommendations for making decisions about when and how to reduce the dose of or stop antipsychotics. Recommendations are meant to assist with, not dictate, decision making in conjunction with patients and families.", "title": "" }, { "docid": "c3a8bbd853667155eee4cfb74692bd0f", "text": "The contemporary approach to database system architecture requires the complete integration of data into a single, centralized database; while multiple logical databases can be supported by current database management software, techniques for relating these databases are strictly ad hoc. This problem is aggravated by the trend toward networks of small to medium size computer systems, as opposed to large, stand-alone main-frames. Moreover, while current research on distributed databases aims to provide techniques that support the physical distribution of data items in a computer network environment, current approaches require a distributed database to be logically centralized.", "title": "" }, { "docid": "dde5eb29c02f95cbf47bb9a3895d7fd8", "text": "Text password is the most popular form of user authentication on websites due to its convenience and simplicity. However, users' passwords are prone to be stolen and compromised under different threats and vulnerabilities. Firstly, users often select weak passwords and reuse the same passwords across different websites. Routinely reusing passwords causes a domino effect; when an adversary compromises one password, she will exploit it to gain access to more websites. Second, typing passwords into untrusted computers suffers password thief threat. An adversary can launch several password stealing attacks to snatch passwords, such as phishing, keyloggers and malware. In this paper, we design a user authentication protocol named oPass which leverages a user's cellphone and short message service to thwart password stealing and password reuse attacks. oPass only requires each participating website possesses a unique phone number, and involves a telecommunication service provider in registration and recovery phases. Through oPass, users only need to remember a long-term password for login on all websites. After evaluating the oPass prototype, we believe oPass is efficient and affordable compared with the conventional web authentication mechanisms.", "title": "" }, { "docid": "ce839ea9b5cc8de275b634c920f45329", "text": "As a matter of fact, most natural structures are complex topology structures with intricate holes or irregular surface morphology. These structures can be used as lightweight infill, porous scaffold, energy absorber or micro-reactor. 
With the rapid advancement of 3D printing, the complex topology structures can now be efficiently and accurately fabricated by stacking layered materials. The novel manufacturing technology and application background put forward new demands and challenges to the current design methodologies of complex topology structures. In this paper, a brief review on the development of recent complex topology structure design methods was provided; meanwhile, the limitations of existing methods and future work are also discussed in the end.", "title": "" }, { "docid": "97c40f796f104587a465f5d719653181", "text": "Although some theory suggests that it is impossible to increase one’s subjective well-being (SWB), our ‘sustainable happiness model’ (Lyubomirsky, Sheldon, & Schkade, 2005) specifies conditions under which this may be accomplished. To illustrate the three classes of predictor in the model, we first review research on the demographic/circumstantial, temperament/personality, and intentional/experiential correlates of SWB. We then introduce the sustainable happiness model, which suggests that changing one’s goals and activities in life is the best route to sustainable new SWB. However, the goals and activities must be of certain positive types, must fit one’s personality and needs, must be practiced diligently and successfully, must be varied in their timing and enactment, and must provide a continued stream of fresh positive experiences. Research supporting the model is reviewed, including new research suggesting that happiness intervention effects are not just placebo effects. Everyone wants to be happy. Indeed, happiness may be the ultimate fundamental ‘goal’ that people pursue in their lives (Diener, 2000), a pursuit enshrined as an inalienable right in the US Declaration of Independence. The question of what produces happiness and wellbeing is the subject of a great deal of contemporary research, much of it falling under the rubric of ‘positive psychology’, an emerging field that also considers issues such as what makes for optimal relationships, optimal group functioning, and optimal communities. In this article, we first review some prominent definitions, theories, and research findings in the well-being literature. We then focus in particular on the question of whether it is possible to become lastingly happier in one’s life, drawing from our recent model of sustainable happiness. Finally, we discuss some recent experimental data suggesting that it is indeed possible to boost one’s happiness level, and to sustain that newfound level. A number of possible definitions of happiness exist. Let us start with the three proposed by Ed Diener in his landmark Psychological Bulletin 130 Is It Possible to Become Happier © 2007 The Authors Social and Personality Psychology Compass 1/1 (2007): 129–145, 10.1111/j.1751-9004.2007.00002.x Journal Compilation © 2007 Blackwell Publishing Ltd (1984) article. The first is ‘leading a virtuous life’, in which the person adheres to society’s vision of morality and proper conduct. This definition makes no reference to the person’s feelings or emotions, instead apparently making the implicit assumption that reasonably positive feelings will ensue if the person toes the line. A second definition of happiness involves a cognitive evaluation of life as a whole. Are you content, overall, or would you do things differently given the opportunity? 
This reflects a personcentered view of happiness, and necessarily taps peoples’ subjective judgments of whether they are satisfied with their lives. A third definition refers to typical moods. Are you typically in a positive mood (i.e., inspired, pleased, excited) or a negative mood (i.e., anxious, upset, depressed)? In this person-centered view, it is the balance of positive to negative mood that matters (Bradburn, 1969). Although many other conceptions of well-being exist (Lyubomirsky & Lepper, 1999; Ryan & Frederick, 1997; Ryff & Singer, 1996), ratings of life satisfaction and judgments of the frequency of positive and negative affect have received the majority of the research attention, illustrating the dominance of the second and third (person-centered) definitions of happiness in the research literature. Notably, positive affect, negative affect, and life satisfaction are presumed to be somewhat distinct. Thus, although life satisfaction typically correlates positively with positive affect and negatively with negative affect, and positive affect typically correlates negatively with negative affect, these correlations are not necessarily strong (and they also vary depending on whether one assesses a particular time or context, or the person’s experience as a whole). The generally modest correlations among the three variables means that an individual high in one indicator is not necessarily high (or low) in any other indicator. For example, a person with many positive moods might also experience many negative moods, and a person with predominantly good moods may or may not be satisfied with his or her life. As a case in point, a college student who has many friends and rewarding social interactions may be experiencing frequent pleasant affect, but, if he doubts that college is the right choice for him, he will be discontent with life. In contrast, a person experiencing many negative moods might nevertheless be satisfied with her life, if she finds her life meaningful or is suffering for a good cause. For example, a frazzled new mother may feel that all her most cherished life goals are being realized, yet she is experiencing a great deal of negative emotions on a daily basis. Still, the three quantities typically go together to an extent such that a comprehensive and reliable subjective well-being (SWB) indicator can be computed by summing positive affect and life satisfaction and subtracting negative affect. Can we trust people’s self-reports of happiness (or unhappiness)? Actually, we must: It would make little sense to claim that a person is happy if he or she does not acknowledge being happy. Still, it is possible to corroborate self-reports of well-being with reports from the respondents’ friends and", "title": "" }, { "docid": "a960ced0cd3859c037c43790a6b8436b", "text": "Ferroresonance is a widely studied phenomenon but it is still not well understood because of its complex behavior. It is “fuzzy-resonance.” A simple graphical approach using fundamental frequency phasors has been presented to elevate the readers understanding. Its occurrence and how it appears is extremely sensitive to the transformer characteristics, system parameters, transient voltages and initial conditions. More efficient transformer core material has lead to its increased occurrence and it has considerable effects on system apparatus and protection. 
Power system engineers should strive to recognize potential ferroresonant configurations and design solutions to prevent its occurrence.", "title": "" }, { "docid": "9db388f2564a24f58d8ea185e5b514be", "text": "Analyzing large volumes of log events without some kind of classification is undoable nowadays due to the large amount of events. Using AI to classify events make these log events usable again. With the use of the Keras Deep Learning API, which supports many Optimizing Stochastic Gradient Decent algorithms, better known as optimizers, this research project tried these algorithms in a Long Short-Term Memory (LSTM) network, which is a variant of the Recurrent Neural Networks. These algorithms have been applied to classify and update event data stored in Elastic-Search. The LSTM network consists of five layers where the output layer is a Dense layer using the Softmax function for evaluating the AI model and making the predictions. The Categorical Cross-Entropy is the algorithm used to calculate the loss. For the same AI model, different optimizers have been used to measure the accuracy and the loss. Adam was found as the best choice with an accuracy of 29,8%.", "title": "" }, { "docid": "b0bb9c4bcf666dca927d4f747bfb1ca1", "text": "Remote monitoring of animal behaviour in the environment can assist in managing both the animal and its environmental impact. GPS collars which record animal locations with high temporal frequency allow researchers to monitor both animal behaviour and interactions with the environment. These ground-based sensors can be combined with remotely-sensed satellite images to understand animal-landscape interactions. The key to combining these technologies is communication methods such as wireless sensor networks (WSNs). We explore this concept using a case-study from an extensive cattle enterprise in northern Australia and demonstrate the potential for combining GPS collars and satellite images in a WSN to monitor behavioural preferences and social behaviour of cattle.", "title": "" }, { "docid": "3601a56b6c68864da31ac5aaa67bff1a", "text": "Information asymmetry exists amongst stakeholders in the current food supply chain. Lack of standardization in data format, lack of regulations, and siloed, legacy information systems exasperate the problem. Global agriculture trade is increasing creating a greater need for traceability in the global supply chain. This paper introduces Harvest Network, a theoretical end-to-end, vis a vie “farm-to-fork”, food traceability application integrating the Ethereum blockchain and IoT devices exchanging GS1 message standards. The goal is to create a distributed ledger accessible for all stakeholders in the supply chain. Our design effort creates a basic framework (artefact) for building a prototype or simulation using existing technologies and protocols [1]. The next step is for industry practitioners and researchers to apply AGILE methods for creating working prototypes and advanced projects that bring about greater transparency.", "title": "" }, { "docid": "f92087a8e81c45cd8bedc12fddd682fc", "text": "This paper presented a novel power conversion method of realizing the galvanic isolation by dual safety capacitors (Y-cap) instead of conventional transformer. With limited capacitance of the Y capacitor, series resonant is proposed to achieve the power transfer. The basic concept is to control the power path impedance, which blocks the dominant low-frequency part of touch current and let the high-frequency power flow freely. 
Conceptual analysis, simulation and design considerations are mentioned in this paper. An 85W AC/AC prototype is designed and verified to substitute the isolation transformer of a CCFL LCD TV backlight system. Compared with the conventional transformer isolation, the new method is proved to meet the function and safety requirements of its specification while has higher efficiency and smaller size.", "title": "" }, { "docid": "5fde7006ec6f7cf4f945b234157e5791", "text": "In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to the state-of-the-art performance SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.", "title": "" }, { "docid": "2070b05100a92e883252c80666c3dde8", "text": "Visiting museums and exhibitions represented in multi-user 3D environments can be an efficient way of learning about the exhibits in an interactive manner and socialising with other visitors. The rich educational information presented in the virtual environment and the presence of remote users could also be beneficial for the visitors of the physical exhibition space. In this paper we present the design and implementation of a virtual exhibition that allowed local and remote visitors coexist in the environment, access the interactive content and communicate with each other. The virtual exhibition was accessible to the remote users from the Web and to local visitors through an installation in the physical space. The installation projected the virtual world in the exhibition environment and let users interact with it using a handheld gesture-based device. We performed an evaluation of the 3D environment with the participation of both local and remote visitors. The evaluation results indicate that the virtual world was considered exciting and easy to use by the majority of the participants. Furthermore, according to the evaluation results, virtual museums and exhibitions seem to have significant advantages for remote visitors compared to typical museum web sites, and they can also be an important aid to local visitors and enhance their experience.", "title": "" }, { "docid": "5b6d68984b4f9a6e0f94e0a68768dc8c", "text": "In this paper, we focus on a major internet problem which is a huge amount of uncategorized text. We review existing techniques used for feature selection and categorization. After reviewing the existing literature, it was found that there exist some gaps in existing algorithms, one of which is a requirement of the labeled dataset for the training of the classifier. 
Keywords— Bayesian; KNN; PCA; SVM; TF-IDF", "title": "" } ]
scidocsrr
571e0e024a5a6e993970ce2faf0b82f5
Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation
[ { "docid": "ee3b2a97f01920ccbc653f4833820ca0", "text": "Notwithstanding many years of progress, pedestrian recognition is still a difficult but important problem. We present a novel multilevel Mixture-of-Experts approach to combine information from multiple features and cues with the objective of improved pedestrian classification. On pose-level, shape cues based on Chamfer shape matching provide sample-dependent priors for a certain pedestrian view. On modality-level, we represent each data sample in terms of image intensity, (dense) depth, and (dense) flow. On feature-level, we consider histograms of oriented gradients (HOG) and local binary patterns (LBP). Multilayer perceptrons (MLP) and linear support vector machines (linSVM) are used as expert classifiers. Experiments are performed on a unique real-world multi-modality dataset captured from a moving vehicle in urban traffic. This dataset has been made public for research purposes. Our results show a significant performance boost of up to a factor of 42 in reduction of false positives at constant detection rates of our approach compared to a baseline intensity-only HOG/linSVM approach.", "title": "" }, { "docid": "6c7156d5613e1478daeb08eecb17c1e2", "text": "The idea behind the experiments in section 4.1 of the main paper is to demonstrate that, within a single framework, varying the features can replicate the jump in detection performance over a ten-year span (2004 2014), i.e. the jump in performance between VJ and the current state-of-the-art. See figure 1 for results on INRIA and Caltech-USA of the following methods (all based on SquaresChnFtrs, described in section 4 of the paper):", "title": "" } ]
[ { "docid": "01809d609802d949aa8c1604db29419d", "text": "Do convolutional networks really need a fixed feed-forward structure? What if, after identifying the high-level concept of an image, a network could move directly to a layer that can distinguish finegrained differences? Currently, a network would first need to execute sometimes hundreds of intermediate layers that specialize in unrelated aspects. Ideally, the more a network already knows about an image, the better it should be at deciding which layer to compute next. In this work, we propose convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image. Following a high-level structure similar to residual networks (ResNets), ConvNet-AIG decides for each input image on the fly which layers are needed. In experiments on ImageNet we show that ConvNet-AIG learns distinct inference graphs for different categories. Both ConvNet-AIG with 50 and 101 layers outperform their ResNet counterpart, while using 20% and 33% less computations respectively. By grouping parameters into layers for related classes and only executing relevant layers, ConvNet-AIG improves both efficiency and overall classification quality. Lastly, we also study the effect of adaptive inference graphs on the susceptibility towards adversarial examples. We observe that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms.", "title": "" }, { "docid": "04013595912b4176574fb81b38beade5", "text": "This chapter presents an overview of the current state of cognitive task analysis (CTA) in research and practice. CTA uses a variety of interview and observation strategies to capture a description of the explicit and implicit knowledge that experts use to perform complex tasks. The captured knowledge is most often transferred to training or the development of expert systems. The first section presents descriptions of a variety of CTA techniques, their common characteristics, and the typical strategies used to elicit knowledge from experts and other sources. The second section describes research on the impact of CTA and synthesizes a number of studies and reviews pertinent to issues underlying knowledge elicitation. In the third section, we discuss the integration of CTA with training design. Finally, in the fourth section, we present a number of recommendations for future research and conclude with general comments.", "title": "" }, { "docid": "c992c686e7e1b49127f6444a6adfa11e", "text": "Published version ATTWOOD, F. (2005). What do people do with porn? qualitative research into the consumption, use and experience of pornography and other sexually explicit media. Sexuality and culture, 9 (2), 65-86. one copy of any article(s) in SHURA to facilitate their private study or for non-commercial research. You may not engage in further distribution of the material or use it for any profit-making activities or any commercial gain.", "title": "" }, { "docid": "fcfafe226a7ab72b5e18d524344400a3", "text": "This paper proposes several adjustments to the ISO 12233 slanted edge algorithm for estimating camera MTF. First, the Ridler-Calvard binary image segmentation method is used to find the line. Secondly, total least squares, rather than ordinary least squares, is used to compute the line parameters. Finally, the pixel values are projected in the reverse direction from the 1D array to the 2D image, rather than from the 2D image to the 1D array. 
Together, these changes yield an algorithm that exhibits significantly less variation than existing techniques when applied to real images. In particular, the proposed algorithm is largely invariant to the rotation angle of the edge as well as to the size of the image crop.", "title": "" }, { "docid": "4a96980dc1ba12b1ea822699a6505aed", "text": "Estimating the 6D pose of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using an untangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over stateof-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.", "title": "" }, { "docid": "9aab4a607de019226e9465981b82f9b8", "text": "Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. However, most guidelines for effective color choice in visualization are based on either color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that peoples' abilities to perceive color differences varies significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.", "title": "" }, { "docid": "819f6b62eb3f8f9d60437af28c657935", "text": "The global electrical energy consumption is rising and there is a steady increase of the demand on the power capacity, efficient production, distribution and utilization of energy. The traditional power systems are changing globally, a large number of dispersed generation (DG) units, including both renewable and nonrenewable energy sources such as wind turbines, photovoltaic (PV) generators, fuel cells, small hydro, wave generators, and gas/steam powered combined heat and power stations, are being integrated into power systems at the distribution level. Power electronics, the technology of efficiently processing electric power, play an essential part in the integration of the dispersed generation units for good efficiency and high performance of the power systems. 
This paper reviews the applications of power electronics in the integration of DG units, in particular, wind power, fuel cells and PV generators.", "title": "" }, { "docid": "49663600aeff26af65fbfe39f2ed0161", "text": "Misuse cases and attack trees have been suggested for security requirements elicitation and threat modeling in software projects. Their use is believed to increase security awareness throughout the software development life cycle. Experiments have identified strengths and weaknesses of both model types. In this paper we present how misuse cases and attack trees can be linked to get a high-level view of the threats towards a system through misuse case diagrams and a more detailed view on each threat through attack trees. Further, we introduce links to security activity descriptions in the form of UML activity graphs. These can be used to describe mitigating security activities for each identified threat. The linking of different models makes most sense when security modeling is supported by tools, and we present the concept of a security repository that is being built to store models and relations such as those presented in this paper.", "title": "" }, { "docid": "503951e241d69d6ca21392807141ad45", "text": "The authors examined the efficacy, speed, and incidence of symptom worsening for 3 treatments of posttraumatic stress disorder (PTSD): prolonged exposure, relaxation training, or eye movement desensitization and reprocessing (EMDR; N = 60). Treaments did not differ in attrition, in the incidence of symptom worsening, or in their effects on numbing and hyperarousal symptoms. Compared with EMDR and relaxation training, exposure therapy (a) produced significantly larger reductions in avoidance and reexperiencing symptoms, (b) tended to be faster at reducing avoidance, and (c) tended to yield a greater proportion of participants who no longer met criteria for PTSD after treatment. EMDR and relaxation did not differ from one another in speed or efficacy.", "title": "" }, { "docid": "1065c331b4a9ae5209ee3f35e5a2041b", "text": "Recent acts of extreme violence involving teens and associated links to violent video games have led to an increased interest in video game violence. Research suggests that violent video games influence aggressive behavior, aggressive affect, aggressive cognition, and physiological arousal. Anderson and Bushman [Annu. Rev. Psychol. 53 (2002) 27.] have posited a General Aggression Model (GAM) to explain the mechanism behind the link between violent video games and aggressive behavior. However, the influence of violent video games as a function of developmental changes across adolescence has yet to be addressed. The purpose of this review is to integrate the GAM with developmental changes that occur across adolescence. D 2002 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "4a5f05a7aea8a02cf70d6c644e06dda0", "text": "Sales pipeline win-propensity prediction is fundamental to effective sales management. In contrast to using subjective human rating, we propose a modern machine learning paradigm to estimate the winpropensity of sales leads over time. A profile-specific two-dimensional Hawkes processes model is developed to capture the influence from seller’s activities on their leads to the win outcome, coupled with lead’s personalized profiles. It is motivated by two observations: i) sellers tend to frequently focus their selling activities and efforts on a few leads during a relatively short time. 
This is evidenced and reflected by their concentrated interactions with the pipeline, including login, browsing and updating the sales leads which are logged by the system; ii) the pending opportunity is prone to reach its win outcome shortly after such temporally concentrated interactions. Our model is deployed and in continual use to a large, global, B2B multinational technology enterprize (Fortune 500) with a case study. Due to the generality and flexibility of the model, it also enjoys the potential applicability to other real-world problems.", "title": "" }, { "docid": "3e1690ae4d61d87edb0e4c3ce40f6a88", "text": "Despite previous efforts in auditing software manually and automatically, buffer overruns are still being discovered in programs in use. A dynamic bounds checker detects buffer overruns in erroneous software before it occurs and thereby prevents attacks from corrupting the integrity of the system. Dynamic buffer overrun detectors have not been adopted widely because they either (1) cannot guard against all buffer overrun attacks, (2) break existing code, or (3) incur too high an overhead. This paper presents a practical detector called CRED (C Range Error Detector) that avoids each of these deficiencies. CRED finds all buffer overrun attacks as it directly checks for the bounds of memory accesses. Unlike the original referent-object based bounds-checking technique, CRED does not break existing code because it uses a novel solution to support program manipulation of out-of-bounds addresses. Finally, by restricting the bounds checks to strings in a program, CRED’s overhead is greatly reduced without sacrificing protection in the experiments we performed. CRED is implemented as an extension of the GNU C compiler version 3.3.1. The simplicity of our design makes possible a robust implementation that has been tested on over 20 open-source programs, comprising over 1.2 million lines of C code. CRED proved effective in detecting buffer overrun attacks on programs with known vulnerabilities, and is the only tool found to guard against a testbed of 20 different buffer overflow attacks[34]. Finding overruns only on strings impose an overhead of less This research was performed while the first author was at Stanford University, and this material is based upon work supported in part by the National Science Foundation under Grant No. 0086160. than 26% for 14 of the programs, and an overhead of up to 130% for the remaining six, while the previous state-ofthe-art bounds checker by Jones and Kelly breaks 60% of the programs and is 12 times slower. Incorporating wellknown techniques for optimizing bounds checking into CRED could lead to further performance improvements.", "title": "" }, { "docid": "b4d58813c09030e1c68b4fb573d45389", "text": "With the empirical evidence that Twitter influences the financial market, there is a need for a bottom-up approach focusing on individual Twitter users and their message propagation among a selected Twitter community with regard to the financial market. This paper presents an agent-based simulation framework to model the Twitter network growth and message propagation mechanism in the Twitter financial community. Using the data collected through the Twitter API, the model generates a dynamic community network with message propagation rates by different agent types. The model successfully validates against the empirical characteristics of the Twitter financial community in terms of network demographics and aggregated message propagation pattern. 
Simulation of the 2013 Associated Press hoax incident demonstrates that removing critical nodes of the network (users with top centrality) dampens the message propagation process linearly and critical node of the highest betweenness centrality has the optimal effect in reducing the spread of the malicious message to lesser ratio of the community.", "title": "" }, { "docid": "fb763a2142bd744cc61718939054747f", "text": "A new method of image transmission and cryptography on the basis of Mobius transformation is proposed in this paper. Based on the Mobius transformation, the method of modulation and demodulation in Chen-Mobius communication system, which is quite different from the traditional one, is applied in the image transmission and cryptography. To make such a processing, the Chen-Mobius inverse transformed functions act as the “modulation” waveforms and the receiving end is coherently “demodulated” by the often-used digital waveforms. Simulation results are discussed in some detail. It shows that the new application has excellent performances that the digital image signals can be restored from intense noise and encrypted ones.", "title": "" }, { "docid": "65946b75e84eaa86caf909d4c721a190", "text": "The Park Geun-hye Administration of Korea (2013–2017) aims to increase the level of transparency and citizen trust in government through the Government 3.0 initiative. This new initiative for public sector innovation encourages citizen-government collaboration and collective intelligence, thereby improving the quality of policy-making and implementation and solving public problems in a new way. However, the national initiative that identifies collective intelligence and citizen-government collaboration alike fails to understand what the wisdom of crowds genuinely means. Collective intelligence is not a magic bullet to solve public problems, which are called “wicked problems”. Collective deliberation over public issues often brings pain and patience, rather than fun and joy. It is not so easy that the public finds the best solution for soothing public problems through collective deliberation. The Government 3.0 initiative does not pay much attention to difficulties in gathering scattered wisdom, but rather highlights uncertain opportunities created by collective interactions and communications. This study deeply discusses the weaknesses in the logic of, and approach to, collective intelligence underlying the Government 3.0 initiative in Korea and the overall influence of the national initiative on participatory democracy.", "title": "" }, { "docid": "88976f137ea43b1be8d133ddc4124af2", "text": "Real-time stereo vision is attractive in many areas such as outdoor mapping and navigation. As a popular accelerator in the image processing field, GPU is widely used for the studies of the stereo vision algorithms. Recently, many stereo vision systems on GPU have achieved low error rate, as a result of the development of deep learning. However, their processing speed is normally far from the real-time requirement. In this paper, we propose a real-time stereo vision system on GPU for the high-resolution images. This system also maintains a low error rate compared with other fast systems. In our approach, the image is resized to reduce the computational complexity and to realize the real-time processing. The low error rate is kept by using the cost aggregation with multiple blocks, secondary matching and sub-pixel estimation. 
Its processing speed is 41 fps for $2888\\times 1920$ pixels images when the maximum disparity is 760.", "title": "" }, { "docid": "1238556dbcd297f363fb2116b7ffbab4", "text": "We describe an efficient method to produce objects comprising spatially controlled and graded cross-link densities using vat photopolymerization additive manufacturing (AM). Using a commercially available diacrylate-based photoresin, 3D printer, and digital light processing (DLP) projector, we projected grayscale images to print objects in which the varied light intensity was correlated to controlled cross-link densities and associated mechanical properties. Cylinder and bar test specimens were used to establish correlations between light intensities used for printing and cross-link density in the resulting specimens. Mechanical testing of octet truss unit cells in which the properties of the crossbars and vertices were independently modified revealed unique mechanical responses from the different compositions. From the various test geometries, we measured changes in mechanical properties such as increased strain-to-break in inhomogeneous structures in comparison with homogeneous variants.", "title": "" }, { "docid": "59ec5715b15e3811a0d9010709092d03", "text": "We propose two new models for human action recognition from video sequences using topic models. Video sequences are represented by a novel “bag-of-words” representation, where each frame corresponds to a “word”. Our models differ from previous latent topic models for visual recognition in two major aspects: first of all, the latent topics in our models directly correspond to class labels; secondly, some of the latent variables in previous topic models become observed in our case. Our models have several advantages over other latent topic models used in visual recognition. First of all, the training is much easier due to the decoupling of the model parameters. Secondly, it alleviates the issue of how to choose the appropriate number of latent topics. Thirdly, it achieves much better performance by utilizing the information provided by the class labels in the training set. We present action classification results on five different datasets. Our results are either comparable to, or significantly better than previous published results on these datasets. Index Terms —Human action recognition, video analysis, bag-of-words, probabilistic graphical models, event and activity understanding", "title": "" }, { "docid": "f44d3512cd8658f824b0ba0ea5a69e4a", "text": "Customer retention is a major issue for various service-based organizations particularly telecom industry, wherein predictive models for observing the behavior of customers are one of the great instruments in customer retention process and inferring the future behavior of the customers. However, the performances of predictive models are greatly affected when the real-world data set is highly imbalanced. A data set is called imbalanced if the samples size from one class is very much smaller or larger than the other classes. The most commonly used technique is over/under sampling for handling the class-imbalance problem (CIP) in various domains. 
In this paper, we survey six well-known sampling techniques and compare the performances of these key techniques, i.e., mega-trend diffusion function (MTDF), synthetic minority oversampling technique, adaptive synthetic sampling approach, couples top-N reverse k-nearest neighbor, majority weighted minority oversampling technique, and immune centroids oversampling technique. Moreover, this paper also reveals the evaluation of four rules-generation algorithms (the learning from example module, version 2 (LEM2), covering, exhaustive, and genetic algorithms) using publicly available data sets. The empirical results demonstrate that the overall predictive performance of MTDF and rules-generation based on genetic algorithms performed the best as compared with the rest of the evaluated oversampling methods and rule-generation algorithms.", "title": "" }, { "docid": "d3c11fc96110e1ab0b801a5ba81133e1", "text": "Two experiments comparing user performance on ClearType and Regular displays are reported. In the first, 26 participants scanned a series of spreadsheets for target information. Speed of performance was significantly faster with ClearType. In the second experiment, 25 users read two articles for meaning. Reading speed was significantly faster for ClearType. In both experiments no differences in accuracy of performance or visual fatigue scores were observed. The data also reveal substantial individual differences in performance suggesting ClearType may not be universally beneficial to information workers.", "title": "" } ]
scidocsrr
7b601e8f4a1f3fa0ad4b2b998a423e5f
Server-side object recognition and client-side object tracking for mobile augmented reality
[ { "docid": "98cc792a4fdc23819c877634489d7298", "text": "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.", "title": "" }, { "docid": "3982c66e695fdefe36d8d143247add88", "text": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.", "title": "" } ]
[ { "docid": "4124a456822b84ab9d02148179a874ca", "text": "Successful endurance training involves the manipulation of training intensity, duration, and frequency, with the implicit goals of maximizing performance, minimizing risk of negative training outcomes, and timing peak fitness and performances to be achieved when they matter most. Numerous descriptive studies of the training characteristics of nationally or internationally competitive endurance athletes training 10 to 13 times per week seem to converge on a typical intensity distribution in which about 80% of training sessions are performed at low intensity (2 mM blood lactate), with about 20% dominated by periods of high-intensity work, such as interval training at approx. 90% VO2max. Endurance athletes appear to self-organize toward a high-volume training approach with careful application of high-intensity training incorporated throughout the training cycle. Training intensification studies performed on already well-trained athletes do not provide any convincing evidence that a greater emphasis on high-intensity interval training in this highly trained athlete population gives long-term performance gains. The predominance of low-intensity, long-duration training, in combination with fewer, highly intensive bouts may be complementary in terms of optimizing adaptive signaling and technical mastery at an acceptable level of stress.", "title": "" }, { "docid": "eac2100a0fa189aecc148b70e113a0b0", "text": "Zolt ́n Dörnyei Language Teaching / Volume 31 / Issue 03 / July 1998, pp 117 ­ 135 DOI: 10.1017/S026144480001315X, Published online: 12 June 2009 Link to this article: http://journals.cambridge.org/abstract_S026144480001315X How to cite this article: Zolt ́n Dörnyei (1998). Motivation in second and foreign language learning. Language Teaching, 31, pp 117­135 doi:10.1017/S026144480001315X Request Permissions : Click here", "title": "" }, { "docid": "bb4afc6c50df5b4d32e4ce539932b0bd", "text": "Traumatic brain injury (TBI) is a major health and socioeconomic problem that affects all societies. In recent years, patterns of injury have been changing, with more injuries, particularly contusions, occurring in older patients. Blast injuries have been identified as a novel entity with specific characteristics. Traditional approaches to the classification of clinical severity are the subject of debate owing to the widespread policy of early sedation and ventilation in more severely injured patients, and are being supplemented with structural and functional neuroimaging. Basic science research has greatly advanced our knowledge of the mechanisms involved in secondary damage, creating opportunities for medical intervention and targeted therapies; however, translating this research into patient benefit remains a challenge. Clinical management has become much more structured and evidence based since the publication of guidelines covering many aspects of care. In this Review, we summarise new developments and current knowledge and controversies, focusing on moderate and severe TBI in adults. Suggestions are provided for the way forward, with an emphasis on epidemiological monitoring, trauma organisation, and approaches to management.", "title": "" }, { "docid": "58de6d2acc6287bd7b82e3ecc3d602b4", "text": "In the today‟s world, security is required to transmit confidential information over the network. Security is also demanding in wide range of applications. 
Cryptographic algorithms play a vital role in providing the data security against malicious attacks. RSA algorithm is extensively used in the popular implementations of Public Key Infrastructures. In asymmetric key cryptography, also called Public Key cryptography, two different keys (which form a key pair) are used. One key is used for encryption & only the other corresponding key must be used for decryption. No other key can decrypt the message – not even the original (i.e. the first) key used for encryption. The beauty of this scheme is that every communicating party needs just a key pair for communicating with any number of other communicating parties. Once someone obtains a key pair, he/she can communicate with anyone else. In this paper, we have done an efficient implementation of RSA algorithm using two public key pairs and using some mathematical logic rather than sending the e value directly as a public key. Because if an attacker has opportunity of getting the e value they can directly find d value and decrypt the message.", "title": "" }, { "docid": "84d8ff8724df86ce100ddfbb150e7446", "text": "Adaptive Gaussian mixtures have been used for modeling nonstationary temporal distributions of pixels in video surveillance applications. However, a common problem for this approach is balancing between model convergence speed and stability. This paper proposes an effective scheme to improve the convergence rate without compromising model stability. This is achieved by replacing the global, static retention factor with an adaptive learning rate calculated for each Gaussian at every frame. Significant improvements are shown on both synthetic and real video data. Incorporating this algorithm into a statistical framework for background subtraction leads to an improved segmentation performance compared to a standard method.", "title": "" }, { "docid": "d6e178e87601b2a7d442b97e42c34350", "text": "BACKGROUND\nNo systematic review and narrative synthesis on personal recovery in mental illness has been undertaken.\n\n\nAIMS\nTo synthesise published descriptions and models of personal recovery into an empirically based conceptual framework.\n\n\nMETHOD\nSystematic review and modified narrative synthesis.\n\n\nRESULTS\nOut of 5208 papers that were identified and 366 that were reviewed, a total of 97 papers were included in this review. The emergent conceptual framework consists of: (a) 13 characteristics of the recovery journey; (b) five recovery processes comprising: connectedness; hope and optimism about the future; identity; meaning in life; and empowerment (giving the acronym CHIME); and (c) recovery stage descriptions which mapped onto the transtheoretical model of change. Studies that focused on recovery for individuals of Black and minority ethnic (BME) origin showed a greater emphasis on spirituality and stigma and also identified two additional themes: culturally specific facilitating factors and collectivist notions of recovery.\n\n\nCONCLUSIONS\nThe conceptual framework is a theoretically defensible and robust synthesis of people's experiences of recovery in mental illness. This provides an empirical basis for future recovery-oriented research and practice.", "title": "" }, { "docid": "63a5292e2314ffc9167ec4a9be1e1427", "text": "Distributed Artificial Intelligence (DAI) has existed as a subfield of AI for less than two decades. DAI is concerned with systems that consist of multiple independent entities that interact in a domain. 
Traditionally, DAI has been divided into two sub-disciplines: Distributed Problem Solving (DPS) focuses on the information management aspects of systems with several branches working together towards a common goal; Multiagent Systems (MAS) deals with behavior management in collections of several independent entities, or agents. This survey of MAS is intended to serve as an introduction to the field and as an organizational framework. A series of general multiagent scenarios are presented. For each scenario, the issues that arise are described along with a sampling of the techniques that exist to deal with them. The presented techniques are not exhaustive, but they highlight how multiagent systems can be and have been used to build complex systems. When options exist, the techniques presented are biased towards machine learning approaches. Additional opportunities for applying machine learning to MAS are highlighted and robotic soccer is presented as an appropriate test-bed for MAS. This survey does not focus exclusively on robotic systems since much of the prior research in non-robotic MAS applies to robotic systems as well. However, several robotic MAS, including all of those presented in this issue, are discussed.", "title": "" }, { "docid": "59344cfe759a89a68e7bc4b0a5c971b1", "text": "A non-linear support vector machine (NLSVM) seizure classification SoC with 8-channel EEG data acquisition and storage for epileptic patients is presented. The proposed SoC is the first work in literature that integrates a feature extraction (FE) engine, patient specific hardware-efficient NLSVM classification engine, 96 KB SRAM for EEG data storage and low-noise, high dynamic range readout circuits. To achieve on-chip integration of the NLSVM classification engine with minimum area and energy consumption, the FE engine utilizes time division multiplexing (TDM)-BPF architecture. The implemented log-linear Gaussian basis function (LL-GBF) NLSVM classifier exploits the linearization to achieve energy consumption of 0.39 μ J/operation and reduces the area by 28.2% compared to conventional GBF implementation. The readout circuits incorporate a chopper-stabilized DC servo loop to minimize the noise level elevation and achieve noise RTI of 0.81 μ Vrms for 0.5-100 Hz bandwidth with an NEF of 4.0. The 5 × 5 mm (2) SoC is implemented in a 0.18 μm 1P6M CMOS process consuming 1.83 μ J/classification for 8-channel operation. SoC verification has been done with the Children's Hospital Boston-MIT EEG database, as well as with a specific rapid eye-blink pattern detection test, which results in an average detection rate, average false alarm rate and latency of 95.1%, 0.94% (0.27 false alarms/hour) and 2 s, respectively.", "title": "" }, { "docid": "b6a68089a65d3fb183be256fd72b8720", "text": "Headline generation is a special type of text summarization task. While the amount of available training data for this task is almost unlimited, it still remains challenging, as learning to generate headlines for news articles implies that the model has strong reasoning about natural language. To overcome this issue, we applied recent Universal Transformer architecture paired with byte-pair encoding technique and achieved new state-of-the-art results on the New York Times Annotated corpus with ROUGE-L F1-score 24.84 and ROUGE-2 F1-score 13.48. 
We also present the new RIA corpus and reach ROUGE-L F1-score 36.81 and ROUGE-2 F1-score 22.15 on it.", "title": "" }, { "docid": "75d76315376a1770c4be06d420a0bf96", "text": "Motor vehicles greatly influence human life but are also a major cause of death and road congestion, which is an obstacle to future economic development. We believe that by learning driving patterns, useful navigation support can be provided for drivers. In this paper, we present a simple and reliable method for the recognition of driving events using hidden Markov models (HMMs), popular stochastic tools for studying time series data. A data acquisition system was used to collect longitudinal and lateral acceleration and speed data from a real vehicle in a normal driving environment. Data were filtered, normalized, segmented, and quantified to obtain the symbolic representation necessary for use with discrete HMMs. Observation sequences for training and evaluation were manually selected and classified as events of a particular type. An appropriate model size was selected, and the model was trained for each type of driving events. Observation sequences from the training set were evaluated by multiple models, and the highest probability decides what kind of driving event this sequence represents. The recognition results showed that HMMs could recognize driving events very accurately and reliably.", "title": "" }, { "docid": "68cb8836a07846d19118d21383f6361a", "text": "Background: Dental rehabilitation of partially or totally edentulous patients with oral implants has become a routine treatment modality in the last decades, with reliable long-term results. However, unfavorable local conditions of the alveolar ridge, due to atrophy, periodontal disease, and trauma sequelae may provide insufficient bone volume or unfavorable vertical, horizontal, and sagittal intermaxillary relationships, which may render implant placement impossible or incorrect from a functional and esthetic viewpoint. The aim of the current review is to discuss the different strategies for reconstruction of the alveolar ridge defect for implant placement. Study design: The study design includes a literature review of the articles that address the association between Reconstruction of Mandibular Alveolar Ridge Defects and Implant Placement. Results: Yet, despite an increasing number of publications related to the correction of deficient alveolar ridges, much controversy still exists concerning which is the more suitable and reliable technique. This is often because the publications are of insufficient methodological quality (inadequate sample size, lack of well-defined exclusion and inclusion criteria, insufficient follow-up, lack of well-defined success criteria, etc.). Conclusion: On the basis of available data it is difficult to conclude that a particular surgical procedure offered better outcome as compared to another. Hence the practical use of the available bone augmentation procedures for dental implants depends on the clinician’s preference in general and the clinical findings in the patient in particular. Surgical techniques that reduce trauma, preserve and augment the alveolar ridge represent key areas in the goal to optimize implant results.", "title": "" }, { "docid": "f76194dbaf302eccadf84cb8787d7096", "text": "We compare the restorative effects on cognitive functioning of interactions with natural versus urban environments. 
Attention restoration theory (ART) provides an analysis of the kinds of environments that lead to improvements in directed-attention abilities. Nature, which is filled with intriguing stimuli, modestly grabs attention in a bottom-up fashion, allowing top-down directed-attention abilities a chance to replenish. Unlike natural environments, urban environments are filled with stimulation that captures attention dramatically and additionally requires directed attention (e.g., to avoid being hit by a car), making them less restorative. We present two experiments that show that walking in nature or viewing pictures of nature can improve directed-attention abilities as measured with a backwards digit-span task and the Attention Network Task, thus validating attention restoration theory.", "title": "" }, { "docid": "27a583d33644887ad126e8e4844dd2e3", "text": "In this work, we will explore different approaches used in Cross-Lingual Information Retrieval (CLIR) systems. Mainly, CLIR systems which use statistical machine translation (SMT) systems to translate queries into collection language. This will include using SMT systems as a black box or as a white box, also the SMT systems that are tuned towards better CLIR performance. After that, we will present our approach to rerank the alternative translations using machine learning regression model. This includes also introducing our set of features which we used to train the model. After that, we adapt this reranker for new languages. We also present our query expansion approach using word-embeddings model that is trained on medical data. Finally we reinvestigate translating the document collection into query language, then we present our future work.", "title": "" }, { "docid": "419499ced8902a00909c32db352ea7f5", "text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.", "title": "" }, { "docid": "a4a56e0647849c22b48e7e5dc3f3049b", "text": "The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound sources localization method for a mobile robot with a 32 channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of different pressure sound sources is achieved using the delay and sum beam forming (DSBF) and the frequency band selection (FBS) algorithm. 
Sound sources were mapped by using a wheeled robot equipped with the microphone array. The robot localizes sounds direction on the move and estimates sound sources position using triangulation. Assuming the movement of sound sources, the system set a time limit and uses only the last few seconds data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time limited data with high accuracy. Also, moving sound source separation is experimentally demonstrated with segments of the DSBF enhanced signal derived from the localization process", "title": "" }, { "docid": "e18151d3d45015fcd946d6a516999e62", "text": "Knowledge graphs have become a fundamental asset for search engines. A fair amount of user queries seek information on problem-solving tasks such as building a fence or repairing a bicycle. However, knowledge graphs completely lack this kind of how-to knowledge. This paper presents a method for automatically constructing a formal knowledge base on tasks and task-solving steps, by tapping the contents of online communities such as WikiHow. We employ Open-IE techniques to extract noisy candidates for tasks, steps and the required tools and other items. For cleaning and properly organizing this data, we devise embedding-based clustering techniques. The resulting knowledge base, HowToKB, includes a hierarchical taxonomy of disambiguated tasks, temporal orders of sub-tasks, and attributes for involved items. A comprehensive evaluation of HowToKB shows high accuracy. As an extrinsic use case, we evaluate automatically searching related YouTube videos for HowToKB tasks.", "title": "" }, { "docid": "d4d24bee47b97e1bf4aadad0f3993e78", "text": "An aircraft landed safely is the result of a huge organizational effort required to cope with a complex system made up of humans, technology and the environment. The aviation safety record has improved dramatically over the years to reach an unprecedented low in terms of accidents per million take-offs, without ever achieving the “zero accident” target. The introduction of automation on board airplanes must be acknowledged as one of the driving forces behind the decline in the accident rate down to the current level.", "title": "" }, { "docid": "96607113a8b6d0ca1c043d183420996b", "text": "Primary retroperitoneal masses include a diverse, and often rare, group of neoplastic and non-neoplastic entities that arise within the retroperitoneum but do not originate from any retroperitoneal organ. Their overlapping appearances on cross-sectional imaging may pose a diagnostic challenge to the radiologist; familiarity with characteristic imaging features, together with relevant clinical information, helps to narrow the differential diagnosis. In this article, a systematic approach to identifying and classifying primary retroperitoneal masses is described. The normal anatomy of the retroperitoneum is reviewed with an emphasis on fascial planes, retroperitoneal compartments, and their contents using cross-sectional imaging. Specific radiologic signs to accurately identify an intra-abdominal mass as primary retroperitoneal are presented, first by confirming the location as retroperitoneal and secondly by excluding an organ of origin. A differential diagnosis based on a predominantly solid or cystic appearance, including neoplastic and non-neoplastic entities, is elaborated. 
Finally, key diagnostic clues based on characteristic imaging findings are described, which help to narrow the differential diagnosis. This article provides a comprehensive overview of the cross-sectional imaging features of primary retroperitoneal masses, including normal retroperitoneal anatomy, radiologic signs of retroperitoneal masses and the differential diagnosis of solid and cystic, neoplastic and non-neoplastic retroperitoneal masses, with a view to assist the radiologist in narrowing the differential diagnosis.", "title": "" }, { "docid": "cf95d41dc5a2bcc31b691c04e3fb8b96", "text": "Resection of pancreas, in particular pancreaticoduodenectomy, is a complex procedure, commonly performed in appropriately selected patients with benign and malignant disease of the pancreas and periampullary region. Despite significant improvements in the safety and efficacy of pancreatic surgery, pancreaticoenteric anastomosis continues to be the \"Achilles heel\" of pancreaticoduodenectomy, due to its association with a measurable risk of leakage or failure of healing, leading to pancreatic fistula. The morbidity rate after pancreaticoduodenectomy remains high in the range of 30% to 65%, although the mortality has significantly dropped to below 5%. Most of these complications are related to pancreatic fistula, with serious complications of intra-abdominal abscess, postoperative bleeding, and multiorgan failure. Several pharmacological and technical interventions have been suggested to decrease the pancreatic fistula rate, but the results have been controversial. This paper considers definition and classification of pancreatic fistula, risk factors, and preventive approach and offers management strategy when they do occur.", "title": "" }, { "docid": "dba73424d6215af4a696765ddf03c09d", "text": "We describe how to train a two-layer convolutional Deep Belief Network (DBN) on the 1.6 million tiny images dataset. When training a convolutional DBN, one must decide what to do with the edge pixels of the images. As the pixels near the edge of an image contribute to the fewest convolutional filter outputs, the model may see it fit to tailor its few convolutional filters to better model the edge pixels. This is undesirable because it usually comes at the expense of a good model for the interior parts of the image. We investigate several ways of dealing with the edge pixels when training a convolutional DBN. Using a combination of locally-connected convolutional units and globally-connected units, as well as a few tricks to reduce the effects of overfitting, we achieve state-of-the-art performance in the classification task of the CIFAR-10 subset of the tiny images dataset.", "title": "" } ]
scidocsrr
e1fa522b8efde8d421969f7fee55a2f4
Online and Incremental Appearance-based SLAM in Highly Dynamic Environments
[ { "docid": "beb22339057840dc9a7876a871d242cf", "text": "We look at the problem of location recognition in a large image dataset using a vocabulary tree. This entails finding the location of a query image in a large dataset containing 3times104 streetside images of a city. We investigate how the traditional invariant feature matching approach falls down as the size of the database grows. In particular we show that by carefully selecting the vocabulary using the most informative features, retrieval performance is significantly improved, allowing us to increase the number of database images by a factor of 10. We also introduce a generalization of the traditional vocabulary tree search algorithm which improves performance by effectively increasing the branching factor of a fixed vocabulary tree.", "title": "" }, { "docid": "3982c66e695fdefe36d8d143247add88", "text": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.", "title": "" } ]
[ { "docid": "645f4db902246c01476ae941004bcd94", "text": "The Internet of Things is part of our everyday life, which applies to all aspects of human life; from smart phones and environmental sensors to smart devices used in the industry. Although the Internet of Things has many advantages, there are risks and dangers as well that need to be addressed. The information used and transmitted on Internet of Things contain important info about the daily lives of people, banking information, location and geographical information, environmental and medical information, together with many other sensitive data. Therefore, it is critical to identify and address the security issues and challenges of Internet of Things. In this article, considering the broad scope of this field and its literature, we are going to express some comprehensive information on security challenges of the Internet of Things.", "title": "" }, { "docid": "da86c72fff98d51d4d78ece7516664fe", "text": "OBJECTIVE\nThe purpose of this study was to establish an Indian reference for normal fetal nasal bone length at 16-26 weeks of gestation.\n\n\nMETHODS\nThe fetal nasal bone was measured by ultrasound in 2,962 pregnant women at 16-26 weeks of gestation from 2004 to 2009 by a single operator, who performed three measurements for each woman when the fetus was in the midsagittal plane and the nasal bone was between a 45 and 135° angle to the ultrasound beam. All neonates were examined after delivery to confirm the absence of congenital abnormalities.\n\n\nRESULTS\nThe median nasal bone length increased with gestational age from 3.3 mm at 16 weeks to 6.65 mm at 26 weeks in a linear relationship. The fifth percentile nasal bone lengths were 2.37, 2.4, 2.8, 3.5, 3.6, 3.9, 4.3, 4.6, 4.68, 4.54, and 4.91 mm at 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, and 26 weeks, respectively.\n\n\nCONCLUSIONS\nWe have established the nasal bone length in South Indian fetuses at 16-26 weeks of gestation and there is progressive increase in the fifth percentile of nasal bone length with advancing gestational age. Hence, gestational age should be considered while defining hypoplasia of the nasal bone.", "title": "" }, { "docid": "06abf2a7c6d0c25cfe54422268300e58", "text": "The purpose of the present study is to provide useful data that could be applied to various types of periodontal plastic surgery by detailing the topography of the greater palatine artery (GPA), looking in particular at its depth from the palatal masticatory mucosa (PMM) and conducting a morphometric analysis of the palatal vault. Forty-three hemisectioned hard palates from embalmed Korean adult cadavers were used in this study. The morphometry of the palatal vault was analyzed, and then the specimens were decalcified and sectioned. Six parameters were measured using an image-analysis system after performing a standard calibration. In one specimen, the PMM was separated from the hard palate and subjected to a partial Sihler's staining technique, allowing the branching pattern of the GPA to be observed in a new method. The distances between the GPA and the gingival margin, and between the GPA and the cementoenamel junction were greatest at the maxillary second premolar. The shortest vertical distance between the GPA and the PMM decreased gradually as it proceeded anteriorly. The GPA was located deeper in the high-vault group than in the low-vault group. The premolar region should be recommended as the optimal donor site for tissue grafting, and in particular the second premolar region. 
The maximum size and thickness of tissue that can be harvested from the region were 9.3 mm and 4.0 mm, respectively.", "title": "" }, { "docid": "c6ec311353b0872bcc1dfd09abb7632e", "text": "Deep neural network algorithms are difficult to analyze because they lack structure allowing to understand the properties of underlying transforms and invariants. Multiscale hierarchical convolutional networks are structured deep convolutional networks where layers are indexed by progressively higher dimensional attributes, which are learned from training data. Each new layer is computed with multidimensional convolutions along spatial and attribute variables. We introduce an efficient implementation of such networks where the dimensionality is progressively reduced by averaging intermediate layers along attribute indices. Hierarchical networks are tested on CIFAR image data bases where they obtain comparable precisions to state of the art networks, with much fewer parameters. We study some properties of the attributes learned from these databases.", "title": "" }, { "docid": "eb34879a227b5e3e2374bbb5a85a2c08", "text": "According to the Taiwan Ministry of Education statistics, about one million graduates each year, some of them will go to countries, high schools or tertiary institutions to continue to attend, and some will be ready to enter the workplace employment. During the course of study, the students' all kinds of excellent performance certificates, score transcripts, diplomas, etc., will become an important reference for admitting new schools or new works. As schools make various awards or diplomas, only the names of the schools and the students are input. Due to the lack of effective anti-forge mechanism, events that cause the graduation certificate to be forged often get noticed. In order to solve the problem of counterfeiting certificates, the digital certificate system based on blockchain technology would be proposed. By the unmodifiable property of blockchain, the digital certificate with anti-counterfeit and verifiability could be made. The procedure of issuing the digital certificate in this system is as follows. First, generate the electronic file of a paper certificate accompanying other related data into the database, meanwhile calculate the electronic file for its hash value. Finally, store the hash value into the block in the chain system. The system will create a related QR-code and inquiry string code to affix to the paper certificate. It will provide the demand unit to verify the authenticity of the paper certificate through mobile phone scanning or website inquiries. Through the unmodifiable properties of the blockchain, the system not only enhances the credibility of various paper-based certificates, but also electronically reduces the loss risks of various types of certificates.", "title": "" }, { "docid": "7e208f65cf33a910cc958ec57bdff262", "text": "This study proposed to address a new method that could select subsets more efficiently. In addition, the reasons why employers voluntarily turnover were also investigated in order to increase the classification accuracy and to help managers to prevent employers’ turnover. The mixed subset selection used in this study combined Taguchi method and Nearest Neighbor Classification Rules to select subset and analyze the factors to find the best predictor of employer turnover. 
All the samples used in this study were from industry A, in which the employers left their job during 1st of February, 2001 to 31st of December, 2007, compared with those incumbents. The results showed that through the mixed subset selection method, total 18 factors were found that are important to the employers. In addition, the accuracy of correct selection was 87.85% which was higher than before using this subset selection (80.93%). The new subset selection method addressed in this study does not only provide industries to understand the reasons of employers’ turnover, but also could be a long-term classification prediction for industries. Key-Words: Voluntary Turnover; Subset Selection; Taguchi Methods; Nearest Neighbor Classification Rules; Training pattern", "title": "" }, { "docid": "20c5dfcc5dec2efd1345de1d863bb346", "text": "An important task of public health officials is to keep track of spreading epidemics, and the locations and speed with which they appear. Furthermore, there is interest in understanding how concerned the population is about a disease outbreak. Twitter can serve as an important data source to provide this information in real time. In this paper, we focus on sentiment classification of Twitter messages to measure the Degree of Concern (DOC) of the Twitter users. In order to achieve this goal, we develop a novel two-step sentiment classification workflow to automatically identify personal tweets and negative tweets. Based on this workflow, we present an Epidemic Sentiment Monitoring System (ESMOS) that provides tools for visualizing Twitter users' concern towards different diseases. The visual concern map and chart in ESMOS can help public health officials to identify the progression and peaks of concern for a disease in space and time, so that appropriate preventive actions can be taken. The DOC measure is based on the sentiment-based classifications. We compare clue-based and different Machine Learning methods to classify sentiments of Twitter users regarding diseases, first into personal and neutral tweets and then into negative from neutral personal tweets. In our experiments, Multinomial Naïve Bayes achieved overall the best results and took significantly less time to build the classifier than other methods.", "title": "" }, { "docid": "2fa5646f8a29de75b476add775ac679f", "text": "(ABSTRACT) As traditional control schemes, open-loop Hysteresis and closed-loop pulse-width-modulation (PWM) have been used for the switched reluctance motor (SRM) current controller. The Hysteresis controller induces large unpleasant audible noises because it needs to vary the switching frequency to maintain constant Hysteresis current band. In contract, the PWM controller is very quiet but difficult to design proper gains and control bandwidth due to the nonlinear nature of the SRM. In this thesis, the ac small signal modeling technique is proposed for linearization of the SRM model such that a conventional PI controller can be designed accordingly for the PWM current controller. With the linearized SRM model, the duty-cycle to output transfer function can be derived, and the controller can be designed with sufficient stability margins. The proposed PWM controller has been simulated to compare the performance against the conventional Hysteresis controller based system. 
It was found that through the frequency spectrum analysis, the noise spectra in audible range disappeared with the fixed switching frequency PWM controller, but was pronounced with the conventional Hysteresis controller. A hardware prototype is then implemented with digital signal processor to verify the quiet nature of the PWM controller when running at 20 kHz switching frequency. The experimental results also indicate a stable current loop operation.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "cca9b3cb4a0d6fb8a690f2243cf7abce", "text": "In this paper, we propose to predict immediacy for interacting persons from still images. A complete immediacy set includes interactions, relative distance, body leaning direction and standing orientation. These measures are found to be related to the attitude, social relationship, social interaction, action, nationality, and religion of the communicators. A large-scale dataset with 10,000 images is constructed, in which all the immediacy measures and the human poses are annotated. We propose a rich set of immediacy representations that help to predict immediacy from imperfect 1-person and 2-person pose estimation results. A multi-task deep recurrent neural network is constructed to take the proposed rich immediacy representation as input and learn the complex relationship among immediacy predictions multiple steps of refinement. The effectiveness of the proposed approach is proved through extensive experiments on the large scale dataset.", "title": "" }, { "docid": "bd37aa47cf495c7ea327caf2247d28e4", "text": "The purpose of this study is to identify the negative effects of social network sites such as Facebook among Asia Pacific University scholars. The researcher, distributed 152 surveys to students of the chosen university to examine and study the negative effects. Electronic communication is emotionally gratifying but how do such technological distraction impact on academic performance? Because of social media platform’s widespread adoption by university students, there is an interest in how Facebook is related to academic performance. This paper measure frequency of use, participation in activities and time spent preparing for class, in order to know if Facebook affects the performance of students. Moreover, the impact of social network site on academic performance also raised another major concern which is health. Today social network sites are running the future and carrier of students. Social network sites were only an electronic connection between users, but unfortunately it has become an addiction for students. This paper examines the relationship between social network sites and health threat. Lastly, the paper provides a comprehensive analysis of the law and privacy of Facebook. 
It shows how Facebook users socialize on the site, while they are not aware or misunderstand the risk involved and how their privacy suffers as a result.", "title": "" }, { "docid": "55e9346ae7bcdac1de999534de34eca5", "text": "Semantic computing and enterprise Linked Data have recently gained traction in enterprises. Although the concept of Enterprise Knowledge Graphs (EKGs) has meanwhile received some attention, a formal conceptual framework for designing such graphs has not yet been developed. By EKG we refer to a semantic network of concepts, properties, individuals and links representing and referencing foundational and domain knowledge relevant for an enterprise. Through the efforts reported in this paper, we aim to bridge the gap between the increasing need for EKGs and the lack of formal methods for realising them. We present a thorough study of the key concepts of knowledge graphs design along with an analysis of the advantages and disadvantages of various design decisions. In particular, we distinguish between two polar approaches towards data fusion, i.e., the unified and the federated approach, describe their benefits and point out shortages.", "title": "" }, { "docid": "457e2f2583a94bf8b6f7cecbd08d7b34", "text": "We present a fast structure-based ASCII art generation method that accepts arbitrary images (real photograph or hand-drawing) as input. Our method supports not only fixed width fonts, but also the visually more pleasant and computationally more challenging proportional fonts, which allows us to represent challenging images with a variety of structures by characters. We take human perception into account and develop a novel feature extraction scheme based on a multi-orientation phase congruency model. Different from most existing contour detection methods, our scheme does not attempt to remove textures as much as possible. Instead, it aims at faithfully capturing visually sensitive features, including both main contours and textural structures, while suppressing visually insensitive features, such as minor texture elements and noise. Together with a deformation-tolerant image similarity metric, we can generate lively and meaningful ASCII art, even when the choices of character shapes and placement are very limited. A dynamic programming based optimization is proposed to simultaneously determine the optimal proportional-font characters for matching and their optimal placement. Experimental results show that our results outperform state-of-the-art methods in term of visual quality.", "title": "" }, { "docid": "56d6528588a70de9a0dd19bbe5c3e896", "text": "We are concerned with learning models that generalize well to different unseen domains. We consider a worst-case formulation over data distributions that are near the source domain in the feature space. Only using training data from a single source distribution, we propose an iterative procedure that augments the dataset with examples from a fictitious target domain that is \"hard\" under the current model. We show that our iterative scheme is an adaptive data augmentation method where we append adversarial examples at each iteration. For softmax losses, we show that our method is a data-dependent regularization scheme that behaves differently from classical regularizers that regularize towards zero (e.g., ridge or lasso). 
On digit recognition and semantic segmentation tasks, our method learns models improve performance across a range of a priori unknown target domains.", "title": "" }, { "docid": "28a6111c13e9554bf32533f13e56e92b", "text": "OBJECTIVES\nTo better categorize the epidemiologic profile, clinical features, and disease associations of loose anagen hair syndrome (LAHS) compared with other forms of childhood alopecia.\n\n\nDESIGN\nRetrospective survey.\n\n\nSETTING\nAcademic pediatric dermatology practice. Patients Three hundred seventy-four patients with alopecia referred from July 1, 1997, to June 31, 2007.\n\n\nMAIN OUTCOME MEASURES\nEpidemiologic data for all forms of alopecia were ascertained, such as sex, age at onset, age at the time of evaluation, and clinical diagnosis. Patients with LAHS were further studied by the recording of family history, disease associations, hair-pull test or biopsy results, hair color, laboratory test result abnormalities, initial treatment, and involvement of eyelashes, eyebrows, and nails.\n\n\nRESULTS\nApproximately 10% of all children with alopecia had LAHS. The mean age (95% confidence interval) at onset differed between patients with LAHS (2.8 [1.2-4.3] years) vs patients without LAHS (7.1 [6.6-7.7] years) (P < .001), with 3 years being the most common age at onset for patients with LAHS. All but 1 of 37 patients with LAHS were female. The most common symptom reported was thin, sparse hair. Family histories were significant for LAHS (n = 1) and for alopecia areata (n = 3). In 32 of 33 patients, trichograms showed typical loose anagen hairs. Two children had underlying genetic syndromes. No associated laboratory test result abnormalities were noted among patients who underwent testing.\n\n\nCONCLUSIONS\nLoose anagen hair syndrome is a common nonscarring alopecia in young girls with a history of sparse or fine hair. Before ordering extensive blood testing in young girls with diffusely thin hair, it is important to perform a hair-pull test, as a trichogram can be instrumental in the confirmation of a diagnosis of LAHS.", "title": "" }, { "docid": "f2521fbfd566fcf31b5810695e748ba0", "text": "A facile approach for coating red fluoride phosphors with a moisture-resistant alkyl phosphate layer with a thickness of 50-100 nm is reported. K2 SiF6 :Mn(4+) particles were prepared by co-precipitation and then coated by esterification of P2 O5 with alcohols (methanol, ethanol, and isopropanol). This route was adopted to encapsulate the prepared phosphors using transition-metal ions as cross-linkers between the alkyl phosphate moieties. The coated phosphor particles exhibited a high water tolerance and retained approximately 87 % of their initial external quantum efficiency after aging under high-humidity (85 %) and high-temperature (85 °C) conditions for one month. Warm white-light-emitting diodes that consisted of blue InGaN chips, the prepared K2 SiF6 :Mn(4+) phosphors, and either yellow Y3 Al5 O12 :Ce(3+) phosphors or green β-SiAlON: Eu(2+) phosphors showed excellent color rendition.", "title": "" }, { "docid": "bd7841688d039371f85d34f982130105", "text": "Behavioral skills or policies for autonomous agents are conventionally learned from reward functions, via reinforcement learning, or from demonstrations, via imitation learning. However, both modes of task specification have their disadvantages: reward functions require manual engineering, while demonstrations require a human expert to be able to actually perform the task in order to generate the demonstration. 
Instruction following from natural language instructions provides an appealing alternative: in the same way that we can specify goals to other humans simply by speaking or writing, we would like to be able to specify tasks for our machines. However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task. In this work, we propose an interactive formulation of the task specification problem, where iterative language corrections are provided to an autonomous agent, guiding it in acquiring the desired skill. Our proposed language-guided policy learning algorithm can integrate an instruction and a sequence of corrections to acquire new skills very quickly. In our experiments, we show that this method can enable a policy to follow instructions and corrections for simulated navigation and manipulation tasks, substantially outperforming direct, non-interactive instruction following.", "title": "" }, { "docid": "1b85bf53970400f6005623382f29ce60", "text": "An approach of rapidly computing the projective width of lanes is presented to predict the projective positions and widths of lanes. The Lane Marking Extraction Finite State Machine is designed to extract points with features of lane markings in the image, and a cubic B-spline is adopted to conduct curve fitting to reconstruct road geometry. A statistical search algorithm is also proposed to correctly and adaptively determine thresholds under various kinds of illumination conditions. Furthermore, the parameters of the camera in a moving car may change with the vibration, so a dynamic calibration algorithm is applied to calibrate camera parameters and lane widths with the information of lane projection. Moreover, a fuzzy logic is applied to determine the situation of occlusion. Finally, a region-of-interest determination strategy is developed to reduce the search region and to make the detection more robust with respect to the occlusion on the lane markings or complicated changes of curves and road boundaries.", "title": "" }, { "docid": "3bca1dd8dc1326693f5ebbe0eaf10183", "text": "This paper presents a novel multi-way multi-stage power divider design method based on the theory of small reflections. Firstly, the application of the theory of small reflections is extended from transmission line to microwave network. Secondly, an explicit closed-form analytical formula of the input reflection coefficient, which consists of the scattering parameters of power divider elements and the lengths of interconnection lines between each element, is derived. Thirdly, the proposed formula is applied to determine the lengths of interconnection lines. A prototype of a 16-way 4-stage power divider working at 4 GHz is designed and fabricated. Both the simulation and measurement results demonstrate the validity of the proposed method.", "title": "" }, { "docid": "ab927f80c37446fd649cd75f9bc15c1c", "text": "In this work, we ask the following question: Can visual analogies, learned in an unsupervised way, be used in order to transfer knowledge between pairs of games and even play one game using an agent trained for another game? We attempt to answer this research question by creating visual analogies between a pair of games: a source game and a target game. For example, given a video frame in the target game, we map it to an analogous state in the source game and then attempt to play using a trained policy learned for the source game. 
We demonstrate convincing visual mapping between four pairs of games (eight mappings), which are used to evaluate three transfer learning approaches.", "title": "" } ]
scidocsrr
17d0df26692717974a5e2f582b0b5980
On the relationship between parametric and geometric active contours
[ { "docid": "f3c2663cb0341576d754bb6cd5f2c0f5", "text": "This article surveys deformable models, a promising and vigorously researched computer-assisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics and approximation theory. They have proven to be effective in segmenting, matching and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) a priori knowledge about the location, size and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching and motion tracking.", "title": "" }, { "docid": "08bef09a01414bafcbc778fea85a7c0a", "text": "The use.of energy-minimizing curves, known as “snakes,” to extract features of interest in images has been introduced by Kass, Witkhr & Terzopoulos (Znt. J. Comput. Vision 1, 1987,321-331). We present a model of deformation which solves some of the problems encountered with the original method. The external forces that push the curve to the edges are modified to give more stable results. The original snake, when it is not close enough to contours, is not attracted by them and straightens to a line. Our model makes the curve behave like a balloon which is inflated by an additional force. The initial curve need no longer be close to the solution to converge. The curve passes over weak edges and is stopped only if the edge is strong. We give examples of extracting a ventricle in medical images. We have also made a first step toward 3D object reconstruction, by tracking the extracted contour on a series of successive cross sections.", "title": "" } ]
[ { "docid": "8c308305b4a04934126c4746c8333b52", "text": "The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.", "title": "" }, { "docid": "3a316147ed62da5fc8d9d2acbb745a58", "text": "Today, modern high-end cars have close to 100 electronic control units (ECUs) that are used to implement a variety of applications ranging from safety-critical control to driver assistance and comfort-related functionalities. The total sum of these applications is several million lines of software code. The ECUs are connected to different sensors and actuators and communicate via a variety of communication buses like CAN, FlexRay and now also Ethernet. In the case of electric vehicles, both the amount and the importance of such electronics and software are even higher. Here, a number of hydraulic or pneumatic controls are replaced by corresponding software-implemented controllers in order to reduce the overall weight of the car and hence to improve its driving range. Until recently, most of the software and system design in the automotive domain -- as in many other domains -- relied on an always correctly functioning or a zero-defect hardware implementation platform. However, as the device geometries of integrated circuits continue to shrink, this assumption is increasingly not true. Incorporating large safety margins in the design process results in very pessimistic design and expensive processors. Further, the processors in cars -- in contrast to those in many consumer electronics devices like mobile phones -- are exposed to harsh environments, extreme temperature variations, and often, strong electromagnetic fields. Hence, their reliability is even more questionable and must be explicitly accounted for in all layers of design abstraction -- starting from circuit design to architecture design, to software design and runtime management and monitoring. In this paper we outline some of these issues, currently followed practices, and the challenges that lie ahead of us in the automotive and electric vehicles domain.", "title": "" }, { "docid": "126b52ab2e2585eabf3345ef7fb39c51", "text": "We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera. Our framework is the first one to provide simultaneously comprehensive facial motion tracking and a detailed 3D model of the user's head. 
Anyone's head can be instantly reconstructed and his facial motion captured without requiring any training or pre-scanning. The user starts facing the camera with a neutral expression in the first frame, but is free to move, talk and change his face expression as he wills otherwise. The facial motion is tracked using a blendshape representation while the fine geometric details are captured using a Bump image mapped over the template mesh. We propose an efficient algorithm to grow and refine the 3D model of the head on-the-fly and in real-time. We demonstrate robust and high-fidelity simultaneous facial motion tracking and 3D head modeling results on a wide range of subjects with various head poses and facial expressions. Our proposed method offers interesting possibilities for animation production and 3D video telecommunications.", "title": "" }, { "docid": "1841d05590d1173711a2d47824a979cc", "text": "Heater plates or sheets that are visibly transparent have many interesting applications in optoelectronic devices such as displays, as well as in defrosting, defogging, gas sensing and point-of-care disposable devices. In recent years, there have been many advances in this area with the advent of next generation transparent conducting electrodes (TCE) based on a wide range of materials such as oxide nanoparticles, CNTs, graphene, metal nanowires, metal meshes and their hybrids. The challenge has been to obtain uniform and stable temperature distribution over large areas, fast heating and cooling rates at low enough input power yet not sacrificing the visible transmittance. This review provides topical coverage of this important research field paying due attention to all the issues mentioned above.", "title": "" }, { "docid": "b1fc36c696cd492e96c43238b79e7799", "text": "Survivin is one of the most cancer-specific proteins identified to date, being upregulated in almost all human tumors. Biologically, survivin has been shown to inhibit apoptosis, enhance proliferation and promote angiogenesis. Because of its upregulation in malignancy and its key role in apoptosis, proliferation and angiogenesis, survivin is currently attracting considerable attention as a new target for anti-cancer therapies. In several animal model systems, downregulation of survivin or inactivation of its function has been shown to inhibit tumor growth. Strategies under investigation to target survivin include antisense oligonucleotides, siRNA, ribozymes, immunotherapy and small molecular weight molecules. The translation of these findings to the clinic is currently ongoing with a number of phase I/II clinical trials targeting survivin in progress. These include use of the antisense oligonucleotide LY2181308, the low molecular weight molecule inhibitor YM155 and survivin-directed autologous cytotoxic T lymphocytes. The optimum use of survivin antagonists in the treatment of cancer is likely to be in combination with conventional cancer therapies.", "title": "" }, { "docid": "6971aec0fa734dec72100109a77095ef", "text": "We propose a novel neural architecture, Transformer-XL, for modeling longerterm dependency. To address the limitation of fixed-length contexts, we introduce a notion of recurrence by reusing the representations from the history. Empirically, we show state-of-the-art (SoTA) results on both word-level and character-level language modeling datasets, including WikiText-103, One Billion Word, Penn Treebank, and enwiki8. 
Notably, we improve the SoTA results from 1.06 to 0.99 in bpc on enwiki8, from 33.0 to 18.9 in perplexity on WikiText-103, and from 28.0 to 23.5 in perplexity on One Billion Word. Performance improves when the attention length increases during evaluation, and our best model attends to up to 1,600 words and 3,800 characters. To quantify the effective length of dependency, we devise a new metric and show that on WikiText-103 Transformer-XL manages to model dependency that is about 80% longer than recurrent networks and 450% longer than Transformer. Moreover, Transformer-XL is up to 1,800+ times faster than vanilla Transformer during evaluation.", "title": "" }, { "docid": "ea5431e8f2f1e197988cf1b52ee685ce", "text": "Prunus mume (mei), which was domesticated in China more than 3,000 years ago as ornamental plant and fruit, is one of the first genomes among Prunus subfamilies of Rosaceae been sequenced. Here, we assemble a 280M genome by combining 101-fold next-generation sequencing and optical mapping data. We further anchor 83.9% of scaffolds to eight chromosomes with genetic map constructed by restriction-site-associated DNA sequencing. Combining P. mume genome with available data, we succeed in reconstructing nine ancestral chromosomes of Rosaceae family, as well as depicting chromosome fusion, fission and duplication history in three major subfamilies. We sequence the transcriptome of various tissues and perform genome-wide analysis to reveal the characteristics of P. mume, including its regulation of early blooming in endodormancy, immune response against bacterial infection and biosynthesis of flower scent. The P. mume genome sequence adds to our understanding of Rosaceae evolution and provides important data for improvement of fruit trees.", "title": "" }, { "docid": "e2ea8ec9139837feb95ac432a63afe88", "text": "Augmented and virtual reality have the potential of being indistinguishable from the real world. Holographic displays, including head mounted units, support this vision by creating rich stereoscopic scenes, with objects that appear to float in thin air - often within arm's reach. However, one has but to reach out and grasp nothing but air to destroy the suspension of disbelief. Snake-charmer is an attempt to provide physical form to virtual objects by revisiting the concept of Robotic Graphics or Encountered-type Haptic interfaces with current commodity hardware. By means of a robotic arm, Snake-charmer brings physicality to a virtual scene and explores what it means to truly interact with an object. We go beyond texture and position simulation and explore what it means to have a physical presence inside a virtual scene. We demonstrate how to render surface characteristics beyond texture and position, including temperature; how to physically move objects; and how objects can physically interact with the user's hand. We analyze our implementation, present the performance characteristics, and provide guidance for the construction of future physical renderers.", "title": "" }, { "docid": "76e6c05e41c4e6d3c70c8fedec5c323b", "text": "Commercial light field cameras provide spatial and angular information, but their limited resolution becomes an important problem in practical use. In this letter, we present a novel method for light field image super-resolution (SR) to simultaneously up-sample both the spatial and angular resolutions of a light field image via a deep convolutional neural network. 
We first augment the spatial resolution of each subaperture image by a spatial SR network, then novel views between super-resolved subaperture images are generated by three different angular SR networks according to the novel view locations. We improve both the efficiency of training and the quality of angular SR results by using weight sharing. In addition, we provide a new light field image dataset for training and validating the network. We train our whole network end-to-end, and show state-of-the-art performances on quantitative and qualitative evaluations.", "title": "" }, { "docid": "71723d953f1f4ace7c2501fd2c4e5a9f", "text": "Among all the unique characteristics of a human being, handwriting carries the richest information to gain the insights into the physical, mental and emotional state of the writer. Graphology is the art of studying and analysing handwriting, a scientific method used to determine a person’s personality by evaluating various features from the handwriting. The prime features of handwriting such as the page margins, the slant of the alphabets, the baseline etc. can tell a lot about the individual. To make this method more efficient and reliable, introduction of machines to perform the feature extraction and mapping to various personality traits can be done. This compliments the graphologists, and also increases the speed of analysing handwritten samples. Various approaches can be used for this type of computer aided graphology. In this paper, a novel approach of machine learning technique to implement the automated handwriting analysis tool is discussed.", "title": "" }, { "docid": "72ee3bf58497eddeda11f19488fc8e55", "text": "People can benefit from disclosing negative emotions or stigmatized facets of their identities, and psychologists have noted that imagery can be an effective medium for expressing difficult emotions. Social network sites like Instagram offer unprecedented opportunity for image-based sharing. In this paper, we investigate sensitive self-disclosures on Instagram and the responses they attract. We use visual and textual qualitative content analysis and statistical methods to analyze self-disclosures, associated comments, and relationships between them. We find that people use Instagram to engage in social exchange and story-telling about difficult experiences. We find considerable evidence of social support, a sense of community, and little aggression or support for harmful or pro-disease behaviors. Finally, we report on factors that influence engagement and the type of comments these disclosures attract. Personal narratives, food and beverage, references to illness, and self-appearance concerns are more likely to attract positive social support. Posts seeking support attract significantly more comments. CAUTION: This paper includes some detailed examples of content about eating disorders and self-injury illnesses.", "title": "" }, { "docid": "a59c9aa1b2f09534adf593150624aee4", "text": "Pan-sharpening is a process of acquiring a high resolution multispectral (MS) image by combining a low resolution MS image with a corresponding high resolution panchromatic (PAN) image. 
In this paper, we propose a new variational pan-sharpening method based on three basic assumptions: 1) the gradient of PAN image could be a linear combination of those of the pan-sharpened image bands; 2) the upsampled low resolution MS image could be a degraded form of the pan-sharpened image; and 3) the gradient in the spectrum direction of pan-sharpened image should be approximated to those of the upsampled low resolution MS image. An energy functional, whose minimizer is related to the best pan-sharpened result, is built based on these assumptions. We discuss the existence of minimizer of our energy and describe the numerical procedure based on the split Bregman algorithm. To verify the effectiveness of our method, we qualitatively and quantitatively compare it with some state-of-the-art schemes using QuickBird and IKONOS data. Particularly, we classify the existing quantitative measures into four categories and choose two representatives in each category for more reasonable quantitative evaluation. The results demonstrate the effectiveness and stability of our method in terms of the related evaluation benchmarks. Besides, the computation efficiency comparison with other variational methods also shows that our method is remarkable.", "title": "" }, { "docid": "9f4ed0a381bec3c334ec15dec27a8a24", "text": "Software code review, i.e., the practice of having other team members critique changes to a software system, is a well-established best practice in both open source and proprietary software domains. Prior work has shown that formal code inspections tend to improve the quality of delivered software. However, the formal code inspection process mandates strict review criteria (e.g., in-person meetings and reviewer checklists) to ensure a base level of review quality, while the modern, lightweight code reviewing process does not. Although recent work explores the modern code review process, little is known about the relationship between modern code review practices and long-term software quality. Hence, in this paper, we study the relationship between post-release defects (a popular proxy for long-term software quality) and: (1) code review coverage, i.e., the proportion of changes that have been code reviewed, (2) code review participation, i.e., the degree of reviewer involvement in the code review process, and (3) code reviewer expertise, i.e., the level of domain-specific expertise of the code reviewers. Through a case study of the Qt, VTK, and ITK projects, we find that code review coverage, participation, and expertise share a significant link with software quality. Hence, our results empirically confirm the intuition that poorly-reviewed code has a negative impact on software quality in large systems using modern reviewing tools.", "title": "" }, { "docid": "63efc2ce1756f64a0328ecb64cb9200b", "text": "Memory analysis has gained popularity in recent years proving to be an effective technique for uncovering malware in compromised computer systems. The process of memory acquisition presents unique evidentiary challenges since many acquisition techniques require code to be run on a potential compromised system, presenting an avenue for anti-forensic subversion. In this paper, we examine a number of simple anti-forensic techniques and test a representative sample of current commercial and free memory acquisition tools. We find that current tools are not resilient to very simple anti-forensic measures. 
We present a novel memory acquisition technique, based on direct page table manipulation and PCI hardware introspection, without relying on operating system facilities making it more difficult to subvert. We then evaluate this technique’s further vulnerability to subversion by considering more advanced anti-forensic attacks. a 2013 Johannes Stüttgen and Michael Cohen. Published by Elsevier Ltd. All rights", "title": "" }, { "docid": "c9b7ad5ce16e96d611c608b78d5549f0", "text": "Deep Neural Networks (DNNs) thrive in recent years in which Batch Normalization (BN) plays an indispensable role. However, it has been observed that BN is costly due to the reduction operations. In this paper, we propose alleviating this problem through sampling only a small fraction of data for normalization at each iteration. Specifically, we model it as a statistical sampling problem and identify that by sampling less correlated data, we can largely reduce the requirement of the number of data for statistics estimation in BN, which directly simplifies the reduction operations. Based on this conclusion, we propose two sampling strategies, “Batch Sampling” (randomly select several samples from each batch) and “Feature Sampling” (randomly select a small patch from each feature map of all samples), that take both computational efficiency and sample correlation into consideration. Furthermore, we introduce an extremely simple variant of BN, termed as Virtual Dataset Normalization (VDN), that can normalize the activations well with few synthetical random samples. All the proposed methods are evaluated on various datasets and networks, where an overall training speedup by up to 20% on GPU is practically achieved without the support of any specialized libraries, and the loss on accuracy and convergence rate are negligible. Finally, we extend our work to the “micro-batch normalization” problem and yield comparable performance with existing approaches at the case of tiny batch size.", "title": "" }, { "docid": "499e2c0a0170d5b447548f85d4a9f402", "text": "OBJECTIVE\nTo discuss the role of proprioception in motor control and in activation of the dynamic restraints for functional joint stability.\n\n\nDATA SOURCES\nInformation was drawn from an extensive MEDLINE search of the scientific literature conducted in the areas of proprioception, motor control, neuromuscular control, and mechanisms of functional joint stability for the years 1970-1999.\n\n\nDATA SYNTHESIS\nProprioception is conveyed to all levels of the central nervous system. It serves fundamental roles for optimal motor control and sensorimotor control over the dynamic restraints.\n\n\nCONCLUSIONS/APPLICATIONS\nAlthough controversy remains over the precise contributions of specific mechanoreceptors, proprioception as a whole is an essential component to controlling activation of the dynamic restraints and motor control. Enhanced muscle stiffness, of which muscle spindles are a crucial element, is argued to be an important characteristic for dynamic joint stability. Articular mechanoreceptors are attributed instrumental influence over gamma motor neuron activation, and therefore, serve to indirectly influence muscle stiffness. In addition, articular mechanoreceptors appear to influence higher motor center control over the dynamic restraints. 
Further research conducted in these areas will continue to assist in providing a scientific basis to the selection and development of clinical procedures.", "title": "" }, { "docid": "3532bb1766e9cbe158112a62bdbde52f", "text": "A dual circularly polarized horn antenna, which employs a chiral metamaterial composed of two-layered periodic metallic arc structure, is presented. The whole antenna composite has functions of polarization transformation and band-pass filter. The designed antenna produces left-handed circularly polarized wave in the band from 12.4 GHz to 12.5 GHz, and realizes right-handed circularly polarized wave in the range of 14.2 GHz-14.4 GHz. Due to low loss characteristic of the chiral metamaterial, the measured gains are only reduced by about 0.6 dB at the above two operation frequencies, compared with single horn antenna. The axial ratios are 1.05 dB at 12.45 GHz and 0.95 dB at14.35 GHz.", "title": "" }, { "docid": "e9aac361f8ca1bb8f10409859aef718d", "text": "MapReduce has become an important distributed processing model for large-scale data-intensive applications like data mining and web indexing. Hadoop-an open-source implementation of MapReduce is widely used for short jobs requiring low response time. The current Hadoop implementation assumes that computing nodes in a cluster are homogeneous in nature. Data locality has not been taken into account for launching speculative map tasks, because it is assumed that most maps are data-local. Unfortunately, both the homogeneity and data locality assumptions are not satisfied in virtualized data centers. We show that ignoring the data-locality issue in heterogeneous environments can noticeably reduce the MapReduce performance. In this paper, we address the problem of how to place data across nodes in a way that each node has a balanced data processing load. Given a dataintensive application running on a Hadoop MapReduce cluster, our data placement scheme adaptively balances the amount of data stored in each node to achieve improved data-processing performance. Experimental results on two real data-intensive applications show that our data placement strategy can always improve the MapReduce performance by rebalancing data across nodes before performing a data-intensive application in a heterogeneous Hadoop cluster.", "title": "" }, { "docid": "91abbad1c392dd4fcaf9c75b468c5e2d", "text": "Face alignment is very crucial to the task of face attributes recognition. The performance of face attributes recognition would notably degrade if the fiducial points of the original face images are not precisely detected due to large lighting, pose and occlusion variations. In order to alleviate this problem, we propose a spatial transform based deep CNNs to improve the performance of face attributes recognition. In this approach, we first learn appropriate transformation parameters by a carefully designed spatial transformer network called LoNet to align the original face images, and then recognize the face attributes based on the aligned face images using a deep network called ClNet. To the best of our knowledge, this is the first attempt to use spatial transformer network in face attributes recognition task. 
Extensive experiments on two large and challenging databases (CelebA and LFWA) clearly demonstrate the effectiveness of the proposed approach over the current state-of-the-art.", "title": "" }, { "docid": "c9f2fd6bdcca5e55c5c895f65768e533", "text": "We implemented live-textured geometry model creation with immediate coverage feedback visualizations in AR on the Microsoft HoloLens. A user walking and looking around a physical space can create a textured model of the space, ready for remote exploration and AR collaboration. Out of the box, a HoloLens builds a triangle mesh of the environment while scanning and being tracked in a new environment. The mesh contains vertices, triangles, and normals, but not color. We take the video stream from the color camera and use it to color a UV texture to be mapped to the mesh. Due to the limited graphics memory of the HoloLens, we use a fixed-size texture. Since the mesh generation dynamically changes in real time, we use an adaptive mapping scheme that evenly distributes every triangle of the dynamic mesh onto the fixed-size texture and adapts to new geometry without compromising existing color data. Occlusion is also considered. The user can walk around their environment and continuously fill in the texture while growing the mesh in real-time. We describe our texture generation algorithm and illustrate benefits and limitations of our system with example modeling sessions. Having first-person immediate AR feedback on the quality of modeled physical infrastructure, both in terms of mesh resolution and texture quality, helps the creation of high-quality colored meshes with this standalone wireless device and a fixed memory footprint in real-time.", "title": "" } ]
scidocsrr
467f46c9a94b37b93c02e24ad5f45ec9
TurboQuad: A Novel Leg–Wheel Transformable Robot With Smooth and Fast Behavioral Transitions
[ { "docid": "e5da4f6a9abd5f1c751a366768d8456c", "text": "We report on the design, optimization, and performance evaluation of a new wheel-leg hybrid robot. This robot utilizes a novel transformable wheel that combines the advantages of both circular and legged wheels. To minimize the complexity of the design, the transformation process of the wheel is passive, which eliminates the need for additional actuators. A new triggering mechanism is also employed to increase the transformation success rate. To maximize the climbing ability in legged-wheel mode, the design parameters for the transformable wheel and robot are tuned based on behavioral analyses. The performance of our new development is evaluated in terms of stability, energy efficiency, and the maximum height of an obstacle that the robot can climb over. With the new transformable wheel, the robot can climb over an obstacle 3.25 times as tall as its wheel radius, without compromising its driving ability at a speed of 2.4 body lengths/s with a specific resistance of 0.7 on a flat surface.", "title": "" } ]
[ { "docid": "b5009853d22801517431f46683b235c2", "text": "Artificial intelligence (AI) is the study of how to make computers do things which, at the moment, people do better. Thus Strong AI claims that in near future we will be surrounded by such kinds of machine which can completely works like human being and machine could have human level intelligence. One intention of this article is to excite a broader AI audience about abstract algorithmic information theory concepts, and conversely to inform theorists about exciting applications to AI.The science of Artificial Intelligence (AI) might be defined as the construction of intelligent systems and their analysis.", "title": "" }, { "docid": "9611686ff4eedf047460becec43ce59d", "text": "We propose a novel location-based second-factor authentication solution for modern smartphones. We demonstrate our solution in the context of point of sale transactions and show how it can be effectively used for the detection of fraudulent transactions caused by card theft or counterfeiting. Our scheme makes use of Trusted Execution Environments (TEEs), such as ARM TrustZone, commonly available on modern smartphones, and resists strong attackers, even those capable of compromising the victim phone applications and OS. It does not require any changes in the user behavior at the point of sale or to the deployed terminals. In particular, we show that practical deployment of smartphone-based second-factor authentication requires a secure enrollment phase that binds the user to his smartphone TEE and allows convenient device migration. We then propose two novel enrollment schemes that resist targeted attacks and provide easy migration. We implement our solution within available platforms and show that it is indeed realizable, can be deployed with small software changes, and does not hinder user experience.", "title": "" }, { "docid": "36f31dea196f2d7a74bc442f1c184024", "text": "The causes of Parkinson's disease (PD), the second most common neurodegenerative disorder, are still largely unknown. Current thinking is that major gene mutations cause only a small proportion of all cases and that in most cases, non-genetic factors play a part, probably in interaction with susceptibility genes. Numerous epidemiological studies have been done to identify such non-genetic risk factors, but most were small and methodologically limited. Larger, well-designed prospective cohort studies have only recently reached a stage at which they have enough incident patients and person-years of follow-up to investigate possible risk factors and their interactions. In this article, we review what is known about the prevalence, incidence, risk factors, and prognosis of PD from epidemiological studies.", "title": "" }, { "docid": "b188f936fb618e84a9d93343778a2adc", "text": "Face multi-attribute prediction benefits substantially from multi-task learning (MTL), which learns multiple face attributes simultaneously to achieve shared or mutually related representations of different attributes. The most widely used MTL convolutional neural network is heuristically or empirically designed by sharing all of the convolutional layers and splitting at the fully connected layers for task-specific losses. However, it is improper to view all low and midlevel features for different attributes as being the same, especially when these attributes are only loosely related. In this paper, we propose a novel multi-attribute tensor correlation neural network (MTCN) for face attribute prediction. 
The structure shares the information in low-level features (e.g., the first two convolutional layers) but splits that in high-level features (e.g., from the third convolutional layer to the fully connected layer). At the same time, during high-level feature extraction, each subnetwork (e.g., AgeNet, Gender-Net, ..., and Smile-Net) excavates closely related features from other networks to enhance its features. Then, we project the features of the C9 layers of the finetuned subnetworks into a highly correlated space by using a novel tensor correlation analysis algorithm (NTCCA). The final face attribute prediction is made based on the correlation matrix. Experimental results on benchmarks with multiple face attributes (CelebA and LFWA) show that the proposed approach has superior performance compared to state-of-the-art methods.", "title": "" }, { "docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d", "text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.", "title": "" }, { "docid": "7a18b4e266cb353e523addfacbdf5bdf", "text": "The field of image composition is constantly trying to improve the ways in which an image can be altered and enhanced. While this is usually done in the name of aesthetics and practicality, it also provides tools that can be used to maliciously alter images. In this sense, the field of digital image forensics has to be prepared to deal with the influx of new technology, in a constant arms-race. In this paper, the current state of this armsrace is analyzed, surveying the state-of-the-art and providing means to compare both sides. A novel scale to classify image forensics assessments is proposed, and experiments are performed to test composition techniques in regards to different forensics traces. We show that even though research in forensics seems unaware of the advanced forms of image composition, it possesses the basic tools to detect it.", "title": "" }, { "docid": "88f60c6835fed23e12c56fba618ff931", "text": "Design of fault tolerant systems is a popular subject in flight control system design. In particular, adaptive control approach has been successful in recovering aircraft in a wide variety of different actuator/sensor failure scenarios. However, if the aircraft goes under a severe actuator failure, control system might not be able to adapt fast enough to changes in the dynamics, which would result in performance degradation or even loss of the aircraft. 
Inspired by the recent success of deep learning applications, this work builds a hybrid recurren-t/convolutional neural network model to estimate adaptation parameters for aircraft dynamics under actuator/engine faults. The model is trained offline from a database of different failure scenarios. In case of an actuator/engine failure, the model identifies adaptation parameters and feeds this information to the adaptive control system, which results in significantly faster convergence of the controller coefficients. Developed control system is implemented on a nonlinear 6-DOF F-16 aircraft, and the results show that the proposed architecture is especially beneficial in severe failure scenarios.", "title": "" }, { "docid": "38d791ebe063bd58a04afd21e6d8f25a", "text": "The design of a Web search evaluation metric is closely related with how the user's interaction process is modeled. Each behavioral model results in a different metric used to evaluate search performance. In these models and the user behavior assumptions behind them, when a user ends a search session is one of the prime concerns because it is highly related to both benefit and cost estimation. Existing metric design usually adopts some simplified criteria to decide the stopping time point: (1) upper limit for benefit (e.g. RR, AP); (2) upper limit for cost (e.g. Precision@N, DCG@N). However, in many practical search sessions (e.g. exploratory search), the stopping criterion is more complex than the simplified case. Analyzing benefit and cost of actual users' search sessions, we find that the stopping criteria vary with search tasks and are usually combination effects of both benefit and cost factors. Inspired by a popular computer game named Bejeweled, we propose a Bejeweled Player Model (BPM) to simulate users' search interaction processes and evaluate their search performances. In the BPM, a user stops when he/she either has found sufficient useful information or has no more patience to continue. Given this assumption, a new evaluation framework based on upper limits (either fixed or changeable as search proceeds) for both benefit and cost is proposed. We show how to derive a new metric from the framework and demonstrate that it can be adopted to revise traditional metrics like Discounted Cumulative Gain (DCG), Expected Reciprocal Rank (ERR) and Average Precision (AP). To show effectiveness of the proposed framework, we compare it with a number of existing metrics in terms of correlation between user satisfaction and the metrics based on a dataset that collects users' explicit satisfaction feedbacks and assessors' relevance judgements. Experiment results show that the framework is better correlated with user satisfaction feedbacks.", "title": "" }, { "docid": "45d6863e54b343d7a081e79c84b81e65", "text": "In order to obtain optimal 3D structure and viewing parameter estimates, bundle adjustment is often used as the last step of feature-based structure and motion estimation algorithms. Bundle adjustment involves the formulation of a large scale, yet sparse minimization problem, which is traditionally solved using a sparse variant of the Levenberg-Marquardt optimization algorithm that avoids storing and operating on zero entries. This paper argues that considerable computational benefits can be gained by substituting the sparse Levenberg-Marquardt algorithm in the implementation of bundle adjustment with a sparse variant of Powell's dog leg non-linear least squares technique. 
Detailed comparative experimental results provide strong evidence supporting this claim", "title": "" }, { "docid": "de2bbd675430ffcb490f090f8baec98d", "text": "In this letter, we analyze the electromagnetic characteristic of a frequency selective surface (FSS) radome using the physical optics (PO) method and ray tracing technique. We consider the cross-loop slot FSS and the tangent-ogive radome. Radiation pattern of the FSS radome is computed to illustrate the electromagnetic transmission characteristic.", "title": "" }, { "docid": "961372a5e1b21053894040a11e946c8d", "text": "The main purpose of this paper is to introduce an approach to design a DC-DC boost converter with constant output voltage for grid connected photovoltaic application system. The boost converter is designed to step up a fluctuating solar panel voltage to a higher constant DC voltage. It uses voltage feedback to keep the output voltage constant. To do so, a microcontroller is used as the heart of the control system which it tracks and provides pulse-width-modulation signal to control power electronic device in boost converter. The boost converter will be able to direct couple with grid-tied inverter for grid connected photovoltaic system. Simulations were performed to describe the proposed design. Experimental works were carried out with the designed boost converter which has a power rating of 100 W and 24 V output voltage operated in continuous conduction mode at 20 kHz switching frequency. The test results show that the proposed design exhibits a good performance.", "title": "" }, { "docid": "1c576cf604526b448f0264f2c39f705a", "text": "This paper introduces a high-security post-quantum stateless hash-based signature scheme that signs hundreds of messages per second on a modern 4-core 3.5GHz Intel CPU. Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB. The signature scheme is designed to provide long-term 2 security even against attackers equipped with quantum computers. Unlike most hash-based designs, this signature scheme is stateless, allowing it to be a drop-in replacement for current signature schemes.", "title": "" }, { "docid": "4e93ce8e5a6175dd558954e560d7ddc2", "text": "This paper presents a new type of narrow band filter with good electrical performance and manufacturing flexibility, based on the newly introduced groove gap waveguide technology. The designed third and fifth-order filters work at Ku band with 1% fractional bandwidth. These filter structures are manufactured with an allowable gap between two metal blocks, in such a way that there is no requirement for electrical contact and alignment between the blocks. This is a major manufacturing advantage compared to normal rectangular waveguide filters. The measured results of the manufactured filters show reasonably good agreement with the full-wave simulated results, without any tuning or adjustments.", "title": "" }, { "docid": "59ac2e47ed0824eeba1621673f2dccf5", "text": "In this paper we present a framework for grasp planning with a humanoid robot arm and a five-fingered hand. The aim is to provide the humanoid robot with the ability of grasping objects that appear in a kitchen environment. Our approach is based on the use of an object model database that contains the description of all the objects that can appear in the robot workspace. This database is completed with two modules that make use of this object representation: an exhaustive offline grasp analysis system and a real-time stereo vision system. 
The offline grasp analysis system determines the best grasp for the objects by employing a simulation system, together with CAD models of the objects and the five-fingered hand. The results of this analysis are added to the object database using a description suited to the requirements of the grasp execution modules. A stereo camera system is used for a real-time object localization using a combination of appearance-based and model-based methods. The different components are integrated in a controller architecture to achieve manipulation task goals for the humanoid robot", "title": "" }, { "docid": "e81cffe3f2f716520ede92d482ddab34", "text": "An active research trend is to exploit the consensus mechanism of cryptocurrencies to secure the execution of distributed applications. In particular, some recent works have proposed fair lotteries which work on Bitcoin. These protocols, however, require a deposit from each player which grows quadratically with the number of players. We propose a fair lottery on Bitcoin which only requires a constant deposit.", "title": "" }, { "docid": "c757e54a14beec3b4930ad050a16d311", "text": "The University Class Scheduling Problem (UCSP) is concerned with assigning a number of courses to classrooms taking into consideration constraints like classroom capacities and university regulations. The problem also attempts to optimize the performance criteria and distribute the courses fairly to classrooms depending on the ratio of classroom capacities to course enrollments. The problem is a classical scheduling problem and considered to be NP-complete. It has received some research during the past few years given its wide use in colleges and universities. Several formulations and algorithms have been proposed to solve scheduling problems, most of which are based on local search techniques. In this paper, we propose a complete approach using integer linear programming (ILP) to solve the problem. The ILP model of interest is developed and solved using the three advanced ILP solvers based on generic algorithms and Boolean Satisfiability (SAT) techniques. SAT has been heavily researched in the past few years and has lead to the development of powerful 0-1 ILP solvers that can compete with the best available generic ILP solvers. Experimental results indicate that the proposed model is tractable for reasonable-sized UCSP problems. Index Terms — University Class Scheduling, Optimization, Integer Linear Programming (ILP), Boolean Satisfiability.", "title": "" }, { "docid": "02b6bcef39a21b14ce327f3dc9671fef", "text": "We've all heard tales of multimillion dollar mistakes that somehow ran off course. Are software projects that risky or do managers need to take a fresh approach when preparing for such critical expeditions? Software projects are notoriously difficult to manage and too many of them end in failure. In 1995, annual U.S. spending on software projects reached approximately $250 billion and encompassed an estimated 175,000 projects [6]. Despite the costs involved, press reports suggest that project failures are occurring with alarming frequency. In 1995, U.S companies alone spent an estimated $59 billion in cost overruns on IS projects and another $81 billion on canceled software projects [6]. One explanation for the high failure rate is that managers are not taking prudent measures to assess and manage the risks involved in these projects. 
is Advocates of software project risk management claim that by countering these threats to success, the incidence of failure can be reduced [4, 5]. Before we can develop meaningful risk management strategies, however, we must identify these risks. Furthermore, the relative importance of these risks needs to be established, along with some understanding as to why certain risks are perceived to be more important than others. This is necessary so that managerial attention can be focused on the areas that constitute the greatest threats. Finally, identified risks must be classified in a way that suggests meaningful risk mitigation strategies. Here, we report the results of a Delphi study in which experienced software project managers identified and ranked the most important risks. The study led not only to the identification of risk factors and their relative importance, but also to novel insights into why project managers might view certain risks as being more important than others. Based on these insights, we introduce a framework for classifying software project risks and discuss appropriate strategies for managing each type of risk. Since the 1970s, both academics and practitioners have written about risks associated with managing software projects [1, 2, 4, 5, 7, 8]. Unfortunately , much of what has been written on risk is based either on anecdotal evidence or on studies limited to a narrow portion of the development process. Moreover, no systematic attempts have been made to identify software project risks by tapping the opinions of those who actually have experience in managing such projects. With a few exceptions [3, 8], there has been little attempt to understand the …", "title": "" }, { "docid": "79811b3cfec543470941e9529dc0ab24", "text": "We present a novel method for learning and predicting the affordances of an object based on its physical and visual attributes. Affordance prediction is a key task in autonomous robot learning, as it allows a robot to reason about the actions it can perform in order to accomplish its goals. Previous approaches to affordance prediction have either learned direct mappings from visual features to affordances, or have introduced object categories as an intermediate representation. In this paper, we argue that physical and visual attributes provide a more appropriate mid-level representation for affordance prediction, because they support informationsharing between affordances and objects, resulting in superior generalization performance. In particular, affordances are more likely to be correlated with the attributes of an object than they are with its visual appearance or a linguistically-derived object category. We provide preliminary validation of our method experimentally, and present empirical comparisons to both the direct and category-based approaches of affordance prediction. Our encouraging results suggest the promise of the attributebased approach to affordance prediction.", "title": "" }, { "docid": "257d1de3b45533ca49e0a78ba55c841e", "text": "Machine learning (ML) is the fastest growing field in computer science, and health informatics is among the greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. 
However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML-approaches suffer of insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet well used, so we define it as “algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human.” This “human-in-the-loop” can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem, reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.", "title": "" } ]
scidocsrr
6c7e3ef92a24269304570fa71d090738
Experiences inside the Ubiquitous Oulu Smart City
[ { "docid": "3d9fe9c30d09a9e66f7339b0ad24edb7", "text": "Due to progress in wired and wireless home networking, sensor networks, networked appliances, mechanical and control engineering, and computers, we can build smart homes, and many smart home projects are currently proceeding throughout the world. However, we have to be careful not to repeat the same mistake that was made with home automation technologies that were booming in the 1970s. That is, [total?] automation should not be a goal of smart home technologies. I believe the following points are important in construction of smart homes from users¿ viewpoints: development of interface technologies between humans and systems for detection of human intensions, feelings, and situations; improvement of system knowledge; and extension of human activity support outside homes to the scopes of communities, towns, and cities.", "title": "" } ]
[ { "docid": "3ec70222394018f1d889692ae850b5ca", "text": "In this paper, we proposed an automatic method to segment text from complex background for recognition task. First, a rule-based sampling method is proposed to get portion of the text pixels. Then, the sampled pixels are used for training Gaussian mixture models of intensity and hue components in HSI color space. Finally, the trained GMMs together with the spatial connectivity information are used for segment all of text pixels form their background. We used the word recognition rate to evaluate the segmentation result. Experiments results show that the proposed algorithm can work fully automatically and performs much better than the traditional methods.", "title": "" }, { "docid": "9e3de4720dade2bb73d78502d7cccc8b", "text": "Skeletonization is a way to reduce dimensionality of digital objects. Here, we present an algorithm that computes the curve skeleton of a surface-like object in a 3D image, i.e., an object that in one of the three dimensions is at most twovoxel thick. A surface-like object consists of surfaces and curves crossing each other. Its curve skeleton is a 1D set centred within the surface-like object and with preserved topological properties. It can be useful to achieve a qualitative shape representation of the object with reduced dimensionality. The basic idea behind our algorithm is to detect the curves and the junctions between different surfaces and prevent their removal as they retain the most significant shape representation. 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "ff6cec55a05338f78b5ad57d2bc6922a", "text": "Developing a virtual 3D environment by using game engine is a strategy to incorporate various multimedia data into one platform. The characteristic of game engine that is preinstalled with interactive and navigation tools allows users to explore and engage with the game objects. However, most CAD and GIS applications are not equipped with 3D tools and navigation systems intended to the user experience. In particular, 3D game engines provide standard 3D navigation tools as well as any programmable view to create engaging navigation thorough the virtual environment. By using a game engine, it is possible to create other interaction such as object manipulation, non playing character (NPC) interaction with player and/or environment. We conducted analysis on previous game engines and experiment on urban design project with Unity3D game engine for visualization and interactivity. At the end, we present the advantages and limitations using game technology as visual representation tool for architecture and urban design studies.", "title": "" }, { "docid": "56aa0d8c7d0fa135f5b50ee0aa744cbd", "text": "We explored cultural and historical variations in concepts of happiness. First, we analyzed the definitions of happiness in dictionaries from 30 nations to understand cultural similarities and differences in happiness concepts. Second, we analyzed the definition of happiness in Webster's dictionaries from 1850 to the present day to understand historical changes in American English. Third, we coded the State of the Union addresses given by U.S. presidents from 1790 to 2010. Finally, we investigated the appearance of the phrases happy nation versus happy person in Google's Ngram Viewer from 1800 to 2008. Across cultures and time, happiness was most frequently defined as good luck and favorable external conditions. 
However, in American English, this definition was replaced by definitions focused on favorable internal feeling states. Our findings highlight the value of a historical perspective in the study of psychological concepts.", "title": "" }, { "docid": "11a000ec43847bae955160cf7ea3106d", "text": "Malicious activities on the Internet are one of the most dangerous threats to Internet users and organizations. Malicious software controlled remotely is addressed as one of the most critical methods for executing the malicious activities. Since blocking domain names for command and control (C&C) of the malwares by analyzing their Domain Name System (DNS) activities has been the most effective and practical countermeasure, attackers attempt to hide their malwares by adopting several evasion techniques, such as client sub-grouping and domain flux on DNS activities. A common feature of the recently developed evasion techniques is the utilization of multiple domain names for render malware DNS activities temporally and spatially more complex. In contrast to analyzing the DNS activities for a single domain name, detecting the malicious DNS activities for multiple domain names is not a simple task. The DNS activities of malware that uses multiple domain names, termed multi-domain malware, are sparser and less synchronized with respect to space and time. In this paper, we introduce a malware activity detection mechanism, GMAD: Graph-based Malware Activity Detection that utilizes a sequence of DNS queries in order to achieve robustness against evasion techniques. GMAD uses a graph termed Domain Name Travel Graph which expresses DNS query sequences to detect infected clients and malicious domain names. In addition to detecting malware C&C domain names, GMAD detects malicious DNS activities such as blacklist checking and fake DNS querying. To detect malicious domain names utilized to malware activities, GMAD applies domain name clustering using the graph structure and determines malicious clusters by referring to public blacklists. Through experiments with four sets of DNS traffic captured in two ISP networks in the U.S. and South Korea, we show that GMAD detected thousands of malicious domain names that had neither been blacklisted nor detected through group activity of DNS clients. In a detection accuracy evaluation, GMAD showed an accuracy rate higher than 99% on average, with a higher than 90% precision and lower than 0:5% false positive rate. It is shown that the proposed method is effective for detecting multi-domain malware activities irrespective of evasion techniques. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b7c7984f10f5e55de0c497798b1d64ac", "text": "The relationships between personality traits and performance are often assumed to be linear. This assumption has been challenged conceptually and empirically, but results to date have been inconclusive. In the current study, we took a theory-driven approach in systematically addressing this issue. Results based on two different samples generally supported our expectations of the curvilinear relationships between personality traits, including Conscientiousness and Emotional Stability, and job performance dimensions, including task performance, organizational citizenship behavior, and counterproductive work behaviors. 
We also hypothesized and found that job complexity moderated the curvilinear personality–performance relationships such that the inflection points after which the relationships disappear were lower for low-complexity jobs than they were for high-complexity jobs. This finding suggests that high levels of the two personality traits examined are more beneficial for performance in high- than low-complexity jobs. We conclude by discussing the implications of these findings for the use of personality in personnel selection.", "title": "" }, { "docid": "9747e2be285a5739bd7ee3b074a20ffc", "text": "While software metrics are a generally desirable feature in the software management functions of project planning and project evaluation, they are of especial importance with a new technology such as the object-oriented approach. This is due to the significant need to train software engineers in generally accepted object-oriented principles. This paper presents theoretical work that builds a suite of metrics for object-oriented design. In particular, these metrics are based upon measurement theory and are informed by the insights of experienced object-oriented software developers. The proposed metrics are formally evaluated against a widelyaccepted list of software metric evaluation criteria.", "title": "" }, { "docid": "a4473c2cc7da3fb5ee52b60cee24b9b9", "text": "The ALVINN (Autonomous h d Vehide In a N d Network) projea addresses the problem of training ani&ial naxal naarork in real time to perform difficult perapaon tasks. A L W is a back-propagation network dmpd to dnve the CMU Navlab. a modided Chevy van. 'Ibis ptpa describes the training techniques which allow ALVIN\" to luun in under 5 minutes to autonomously conm>l the Navlab by wardung ahuamr, dziver's rmaions. Usingthese technrques A L W has b&n trained to drive in a variety of Cirarmstanccs including single-lane paved and unprved roads. and multi-lane lined and rmlinecd roads, at speeds of up IO 20 miles per hour", "title": "" }, { "docid": "3ddf6fab70092eade9845b04dd8344a0", "text": "Fractional Fourier transform (FRFT) is a generalization of the Fourier transform, rediscovered many times over the past 100 years. In this paper, we provide an overview of recent contributions pertaining to the FRFT. Specifically, the paper is geared toward signal processing practitioners by emphasizing the practical digital realizations and applications of the FRFT. It discusses three major topics. First, the manuscripts relates the FRFT to other mathematical transforms. Second, it discusses various approaches for practical realizations of the FRFT. Third, we overview the practical applications of the FRFT. From these discussions, we can clearly state that the FRFT is closely related to other mathematical transforms, such as time–frequency and linear canonical transforms. Nevertheless, we still feel that major contributions are expected in the field of the digital realizations and its applications, especially, since many digital realizations of a b Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.", "title": "" }, { "docid": "f21e0b6062b88a14e3e9076cdfd02ad5", "text": "Beyond being facilitators of human interactions, social networks have become an interesting target of research, providing rich information for studying and modeling user’s behavior. 
Identification of personality-related indicators encrypted in Facebook profiles and activities are of special concern in our current research efforts. This paper explores the feasibility of modeling user personality based on a proposed set of features extracted from the Facebook data. The encouraging results of our study, exploring the suitability and performance of several classification techniques, will also be presented.", "title": "" }, { "docid": "ed097b44837a57ad0053ae06a95f1543", "text": "For underwater videos, the performance of object tracking is greatly affected by illumination changes, background disturbances and occlusion. Hence, there is a need to have a robust function that computes image similarity, to accurately track the moving object. In this work, a hybrid model that incorporates the Kalman Filter, a Siamese neural network and a miniature neural network has been developed for object tracking. It was observed that the usage of the Siamese network to compute image similarity significantly improved the robustness of the tracker. Although the model was developed for underwater videos, it was found that it performs well for both underwater and human surveillance videos. A metric has been defined for analyzing detections-to-tracks mapping accuracy. Tracking results have been analyzed using Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP)metrics.", "title": "" }, { "docid": "965b13ed073b4f3d1c97beffe4db1397", "text": "The purpose of this study was to develop a method of classifying cancers to specific diagnostic categories based on their gene expression signatures using artificial neural networks (ANNs). We trained the ANNs using the small, round blue-cell tumors (SRBCTs) as a model. These cancers belong to four distinct diagnostic categories and often present diagnostic dilemmas in clinical practice. The ANNs correctly classified all samples and identified the genes most relevant to the classification. Expression of several of these genes has been reported in SRBCTs, but most have not been associated with these cancers. To test the ability of the trained ANN models to recognize SRBCTs, we analyzed additional blinded samples that were not previously used for the training procedure, and correctly classified them in all cases. This study demonstrates the potential applications of these methods for tumor diagnosis and the identification of candidate targets for therapy.", "title": "" }, { "docid": "d7fb7e12e0ec941fef8a721f63c91337", "text": "This paper presents navigation system for an omni-directional AGV (automatic guided vehicle) with Mecanum wheels. The Mecanum wheel, one design for the wheel which can move in any direction, is a conventional wheel with a series of rollers attached to its circumference. The localization techniques for the general mobile robot use basically encoder. Otherwise, they use gyro and electronic compass with encoder. However, it is difficult to use the encoder because in the Mecanum wheel the slip occurs frequently by the rollers attached to conventional wheel's circumference. Hence, we propose the localization of the omnidirectional AGV with the Mecanum wheel. The proposed localization uses encoder, gyro, and accelerometer. In this paper, we ourselves designed and made the AGV with the Mecanum wheels for experiment. And we analyzed the accuracy of the localization when the AGV moves sideways a 20m distance at about 20cm/s and 38cm/s, respectively. 
In experimental result, we verified that the accuracies of the proposed localization are 27.4944mm and 29.2521mm respectively.", "title": "" }, { "docid": "675007890407b7e8a7d15c1255e77ec6", "text": "This study investigated the influence of the completeness of CRM relational information processes on customer-based relational performance and profit performance. In addition, interaction orientation and CRM readiness were adopted as moderators on the relationship between CRM relational information processes and customer-based performance. Both qualitative and quantitative approaches were applied in this study. The results revealed that the completeness of CRM relational information processes facilitates customer-based relational performance (i.e., customer satisfaction, and positive WOM), and in turn enhances profit performance (i.e., efficiency with regard to identifying, acquiring and retaining, and converting unprofitable customers to profitable ones). The alternative model demonstrated that both interaction orientation and CRM readiness play a mediating role in the relationship between information processes and relational performance. Managers should strengthen the completeness and smoothness of CRM information processes, should increase the level of interactional orientation with customers and should maintain firm CRM readiness to service their customers. The implications of this research and suggestions for managers were also discussed.", "title": "" }, { "docid": "dc2c10774d761875fb9de0c2953af199", "text": "The formation of precipitates, especially along austenite grain boundaries, greatly affects the formation of transverse cracks on the surface of continuous-cast steel. The steel composition and cooling history influences the formation of precipitates, and the higher temperature and corresponding larger grain growth rate under oscillation marks or surface depressions also have an important effect on crack problems. This paper develops a model to predict and track the amount, composition and size distribution of precipitates and the grain size in order to predict the susceptibility of different steel grades to ductility problems during continuous casting processes. The results are important for controlled cooling of microalloyed steels to prevent cracks and assure product quality.", "title": "" }, { "docid": "77562b3fdfb57089d1490fd3f1b68a77", "text": "Recent proposed rulemakings from the Federal Communications Commission in the United States offers the hope of unique access to valuable spectrum; so-called television whitespace (TVWS). Use of this spectrum is contingent upon the protection of the incumbent occupants of the proposed allocation. Television signals are among the most powerful terrestrial RF transmissions on Earth. Even so, detection of these signals to the required levels sufficient to protect these services has proven daunting. Supplemental techniques, such as geo-location, mitigate these challenges for fixed TV broadcast; however, other nomadic low power incumbents also occupy the TVWS spectrum. The most common of these are wireless microphones, a subset of which are licensed and entitled to protection. These devices are allowed a maximum conducted power level of 50 mW and 250 mW on the VHF and UHF channels, respectively. Critical to day-to-day television operations, these devices must also be afforded protection from unlicensed transmitters. 
Wireless microphones often operate at power levels of 25 mW or less, with inefficient antennas placed physically near the body, yielding effective radiated power levels of 5 to 10 mW, often times even less. In addition, the emissions from these devices are often audio-companded FM, making legitimate, licensed operations indistinguishable from narrowband unlicensed transmissions and other discrete carriers. To that end the IEEE 802.22 working group established task group 1 (TG1) to develop a standard for a protective, disabling beacon method capable of insuring detection of legitimate devices.", "title": "" }, { "docid": "205ed1eba187918ac6b4a98da863a6f2", "text": "Since the first papers on asymptotic waveform evaluation (AWE), Pade-based reduced order models have become standard for improving coupled circuit-interconnect simulation efficiency. Such models can be accurately computed using bi-orthogonalization algorithms like Pade via Lanczos (PVL), but the resulting Pade approximates can still be unstable even when generated from stable RLC circuits. For certain classes of RC circuits it has been shown that congruence transforms, like the Arnoldi algorithm, can generate guaranteed stable and passive reduced-order models. In this paper we present a computationally efficient model-order reduction technique, the coordinate-transformed Arnoldi algorithm, and show that this method generates arbitrarily accurate and guaranteed stable reduced-order models for RLC circuits. Examples are presented which demonstrates the enhanced stability and efficiency of the new method.", "title": "" }, { "docid": "74fd65e8298a95b61bc323d9435eaa05", "text": "Next-generation communication systems have to comply with very strict requirements for increased flexibility in heterogeneous environments, high spectral efficiency, and agility of carrier aggregation. This fact motivates research in advanced multicarrier modulation (MCM) schemes, such as filter bank-based multicarrier (FBMC) modulation. This paper focuses on the offset quadrature amplitude modulation (OQAM)-based FBMC variant, known as FBMC/OQAM, which presents outstanding spectral efficiency and confinement in a number of channels and applications. Its special nature, however, generates a number of new signal processing challenges that are not present in other MCM schemes, notably, in orthogonal-frequency-division multiplexing (OFDM). In multiple-input multiple-output (MIMO) architectures, which are expected to play a primary role in future communication systems, these challenges are intensified, creating new interesting research problems and calling for new ideas and methods that are adapted to the particularities of the MIMO-FBMC/OQAM system. The goal of this paper is to focus on these signal processing problems and provide a concise yet comprehensive overview of the recent advances in this area. Open problems and associated directions for future research are also discussed.", "title": "" }, { "docid": "f6c1aa22e2afd24a6ad111d5dfdfc3f3", "text": "This work describes the development of a social chatbot for the football domain. The chatbot, named chatbol, aims at answering a wide variety of questions related to the Spanish football league “La Liga”. Chatbol is deployed as a Slack client for text-based input interaction with users. One of the main Chatbol’s components, a NLU block, is trained to extract the intents and associated entities related to user’s questions about football players, teams, trainers and fixtures. 
The information for the entities is obtained by making sparql queries to Wikidata site in real time. Then, the retrieved data is used to update the specific chatbot responses. As a fallback strategy, a retrieval-based conversational engine is incorporated to the chatbot system. It allows for a wider variety and freedom of responses, still football oriented, for the case when the NLU module was unable to reply with high confidence to the user. The retrieval-based response database is composed of real conversations collected both from a IRC football channel and from football-related excerpts picked up across movie captions, extracted from the OpenSubtitles database.", "title": "" }, { "docid": "bf3450649fdf5d5bb4ee89fbaf7ec0ff", "text": "In this study, we propose a research model to assess the effect of a mobile health (mHealth) app on exercise motivation and physical activity of individuals based on the design and self-determination theory. The research model is formulated from the perspective of motivation affordance and gamification. We will discuss how the use of specific gamified features of the mHealth app can trigger/afford corresponding users’ exercise motivations, which further enhance users’ participation in physical activity. We propose two hypotheses to test the research model using a field experiment. We adopt a 3-phase longitudinal approach to collect data in three different time zones, in consistence with approach commonly adopted in psychology and physical activity research, so as to reduce the common method bias in testing the two hypotheses.", "title": "" } ]
scidocsrr
d666574dab00a7f6a9d30717ee302bd3
Partial Least Squares (PLS) methods for neuroimaging: A tutorial and review
[ { "docid": "8f4a0c6252586fa01133f9f9f257ec87", "text": "The pls package implements principal component regression (PCR) and partial least squares regression (PLSR) in R (R Development Core Team 2006b), and is freely available from the Comprehensive R Archive Network (CRAN), licensed under the GNU General Public License (GPL). The user interface is modelled after the traditional formula interface, as exemplified by lm. This was done so that people used to R would not have to learn yet another interface, and also because we believe the formula interface is a good way of working interactively with models. It thus has methods for generic functions like predict, update and coef. It also has more specialised functions like scores, loadings and RMSEP, and a flexible crossvalidation system. Visual inspection and assessment is important in chemometrics, and the pls package has a number of plot functions for plotting scores, loadings, predictions, coefficients and RMSEP estimates. The package implements PCR and several algorithms for PLSR. The design is modular, so that it should be easy to use the underlying algorithms in other functions. It is our hope that the package will serve well both for interactive data analysis and as a building block for other functions or packages using PLSR or PCR. We will here describe the package and how it is used for data analysis, as well as how it can be used as a part of other packages. Also included is a section about formulas and data frames, for people not used to the R modelling idioms.", "title": "" } ]
[ { "docid": "ff56bae298b25accf6cd8c2710160bad", "text": "An important difference between traditional AI systems and human intelligence is the human ability to harness commonsense knowledge gleaned from a lifetime of learning and experience to make informed decisions. This allows humans to adapt easily to novel situations where AI fails catastrophically due to a lack of situation-specific rules and generalization capabilities. Commonsense knowledge also provides background information that enables humans to successfully operate in social situations where such knowledge is typically assumed. Since commonsense consists of information that humans take for granted, gathering it is an extremely difficult task. Previous versions of SenticNet were focused on collecting this kind of knowledge for sentiment analysis but they were heavily limited by their inability to generalize. SenticNet 4 overcomes such limitations by leveraging on conceptual primitives automatically generated by means of hierarchical clustering and dimensionality reduction.", "title": "" }, { "docid": "cd71e990546785bd9ba0c89620beb8d2", "text": "Crime is one of the most predominant and alarming aspects in our society and its prevention is a vital task. Crime analysis is a systematic way of detecting and investigating patterns and trends in crime. In this work, we use various clustering approaches of data mining to analyse the crime data of Tamilnadu. The crime data is extracted from National Crime Records Bureau (NCRB) of India. It consists of crime information about six cities namely Chennai, Coimbatore, Salem, Madurai, Thirunelvelli and Thiruchirapalli from the year 2000–2014 with 1760 instances and 9 attributes to represent the instances. K-Means clustering, Agglomerative clustering and Density Based Spatial Clustering with Noise (DBSCAN) algorithms are used to cluster crime activities based on some predefined cases and the results of these clustering are compared to find the best suitable clustering algorithm for crime detection. The result of K-Means clustering algorithm is visualized using Google Map for interactive and easy understanding. The K-Nearest Neighbor (KNN) classification is used for crime prediction. The performance of each clustering algorithms are evaluated using the metrics such as precision, recall and F-measure, and the results are compared. This work helps the law enforcement agencies to predict and detect crimes in Tamilnadu with improved accuracy and thus reduces the crime rate.", "title": "" }, { "docid": "f6e791e85d8570a9f10b45e8f028683d", "text": "We present a smartphone-based system for real-time tele-monitoring of physical activity in patients with chronic heart-failure (CHF). We recently completed a pilot study with 15 subjects to evaluate the feasibility of the proposed monitoring in the real world and examine its requirements, privacy implications, usability, and other challenges encountered by the participants and healthcare providers. Our tele-monitoring system was designed to assess patient activity via minute-by-minute energy expenditure (EE) estimated from accelerometry. In addition, we tracked relative user location via global positioning system (GPS) to track outdoors activity and measure walking distance. The system also administered daily surveys to inquire about vital signs and general cardiovascular symptoms. 
The collected data were securely transmitted to a central server where they were analyzed in real time and were accessible to the study medical staff to monitor patient health status and provide medical intervention if needed. Although the system was designed for tele-monitoring individuals with CHF, the challenges, privacy considerations, and lessons learned from this pilot study apply to other chronic health conditions, such as diabetes and hypertension, that would benefit from continuous monitoring through mobile-health (mHealth) technologies.", "title": "" }, { "docid": "64cefd949f61afe81fbbb9ca1159dd4a", "text": "Single carrier frequency division multiple access (SC-FDMA), which utilizes single carrier modulation and frequency domain equalization is a technique that has similar performance and essentially the same overall complexity as those of OFDM, in which high peak-to-average power ratio (PAPR) is a major drawback. An outstanding advantage of SC-FDMA is its lower PAPR due to its single carrier structure. In this paper, we analyze the PAPR of SC-FDMA signals with pulse shaping. We analytically derive the time domain SC-FDMA signals and numerically compare PAPR characteristics using the complementary cumulative distribution function (CCDF) of PAPR. The results show that SC-FDMA signals indeed have lower PAPR compared to those of OFDMA. Comparing the two forms of SC-FDMA, we find that localized FDMA (LFDMA) has higher PAPR than interleaved FDMA (IFDMA) but somewhat lower PAPR than OFDMA. Also noticeable is the fact that pulse shaping increases PAPR", "title": "" }, { "docid": "1d949b64320fce803048b981ae32ce38", "text": "In the field of voice therapy, perceptual evaluation is widely used by expert listeners as a way to evaluate pathological and normal voice quality. This approach is understandably subjective as it is subject to listeners’ bias which high interand intra-listeners variability can be found. As such, research on automatic assessment of pathological voices using a combination of subjective and objective analyses emerged. The present study aimed to develop a complementary automatic assessment system for voice quality based on the well-known GRBAS scale by using a battery of multidimensional acoustical measures through Deep Neural Networks. A total of 44 dimensionality parameters including Mel-frequency Cepstral Coefficients, Smoothed Cepstral Peak Prominence and Long-Term Average Spectrum was adopted. In addition, the state-of-the-art automatic assessment system based on Modulation Spectrum (MS) features and GMM classifiers was used as comparison system. The classification results using the proposed method revealed a moderate correlation with subjective GRBAS scores of dysphonic severity, and yielded a better performance than MS-GMM system, with the best accuracy around 81.53%. The findings indicate that such assessment system can be used as an appropriate evaluation tool in determining the presence and severity of voice disorders.", "title": "" }, { "docid": "f61d5c1b0c17de6aab8a0eafedb46311", "text": "The use of social media creates the opportunity to turn organization-wide knowledge sharing in the workplace from an intermittent, centralized knowledge management process to a continuous online knowledge conversation of strangers, unexpected interpretations and re-uses, and dynamic emergence. 
We theorize four affordances of social media representing different ways to engage in this publicly visible knowledge conversations: metavoicing, triggered attending, network-informed associating, and generative role-taking. We further theorize mechanisms that affect how people engage in the knowledge conversation, finding that some mechanisms, when activated, will have positive effects on moving the knowledge conversation forward, but others will have adverse consequences not intended by the organization. These emergent tensions become the basis for the implications we draw.", "title": "" }, { "docid": "ea87bfc0d6086e367e8950b445529409", "text": " Queue stability (Chapter 2.1)  Scheduling for stability, capacity regions (Chapter 2.3)  Linear programs (Chapter 2.3, Chapter 3)  Energy optimality (Chapter 3.2)  Opportunistic scheduling (Chapter 2.3, Chapter 3, Chapter 4.6)  Lyapunov drift and optimization (Chapter 4.1.0-4.1.2, 4.2, 4.3)  Inequality constraints and virtual queues (Chapter 4.4)  Drift-plus-penalty algorithm (Chapter 4.5)  Performance and delay tradeoffs (Chapter 3.2, 4.5)  Backpressure routing (Ex. 4.16, Chapter 5.2, 5.3)", "title": "" }, { "docid": "b52322509c5bed43b0de04847dd947a9", "text": "Chapter 1 presented a description of the ECG in terms of its etiology and clinical features, and Chapter 2 an overview of the possible sources of error introduced in the hardware collection and data archiving stages. With this groundwork in mind, this chapter is intended to introduce the reader to the ECG using a signal processing approach. The ECG typically exhibits both persistent features (such as the average PQRS-T morphology and the short-term average heart rate, or average RR interval), and nonstationary features (such as the individual RR and QT intervals, and longterm heart rate trends). Since changes in the ECG are quasi-periodic (on a beatto-beat, daily, and perhaps even monthly basis), the frequency can be quantified in both statistical terms (mean, variance) and via spectral estimation methods. In essence, all these statistics quantify the power or degree to which an oscillation is present in a particular frequency band (or at a particular scale), often expressed as a ratio to power in another band. Even for scale-free approaches (such as wavelets), the process of feature extraction tends to have a bias for a particular scale which is appropriate for the particular data set being analyzed. ECG statistics can be evaluated directly on the ECG signal, or on features extracted from the ECG. The latter category can be broken down into either morphology-based features (such as ST level) or timing-based statistics (such as heart rate variability). Before discussing these derived statistics, an overview of the ECG itself is given.", "title": "" }, { "docid": "bc66c4c480569a21fdb593500c7e76cf", "text": "Smallholder subsistence agriculture in the rural Eastern Cape Province is recognised as one of the major contributors to food security among the resourced-poor household. However, subsistence agriculture is thought to be unsustainable in the ever changing social, economic and political environment, and climate. This has contributed greatly to stagnate and widespread poverty among smallholder farmers in the Eastern Cape. For a sustainable transition from subsistence to smallholder commercial farming, strategies like accumulated social capital through rural farmer groups/cooperatives have been employed by the government and NGOs. 
These strategies have yielded mixed results of failed and successful farmer groups/cooperatives. Therefore, this study was aimed at establishing the impact of social capital on farmers’ household commercialization level of maize in addition to farm/farmer characteristics. The findings of this study established that smallholders’ average household commercialization index (HCI) of maize was 45%. Household size, crop sales, source of irrigation water, and bonding social capital had a positive and significant impact on HCI of maize while off-farm incomes and social values had a negative and significant impact on the same. Thus, innovation, adoption and use of labour saving technology, improved access to irrigation water and farmers’ access to trainings in relation to strengthening group cohesion are crucial in promoting smallholder commercial farming of maize in the study area.", "title": "" }, { "docid": "10f46999738c0d47ed16326631086933", "text": "We describe JAX, a domain-specific tracing JIT compiler for generating high-performance accelerator code from pure Python and Numpy machine learning programs. JAX uses the XLA compiler infrastructure to generate optimized code for the program subroutines that are most favorable for acceleration, and these optimized subroutines can be called and orchestrated by arbitrary Python. Because the system is fully compatible with Autograd, it allows forwardand reverse-mode automatic differentiation of Python functions to arbitrary order. Because JAX supports structured control flow, it can generate code for sophisticated machine learning algorithms while maintaining high performance. We show that by combining JAX with Autograd and Numpy we get an easily programmable and highly performant ML system that targets CPUs, GPUs, and TPUs, capable of scaling to multi-core Cloud TPUs.", "title": "" }, { "docid": "9332c32039cf782d19367a9515768e42", "text": "Maternal drug use during pregnancy is associated with fetal passive addiction and neonatal withdrawal syndrome. Cigarette smoking—highly prevalent during pregnancy—is associated with addiction and withdrawal syndrome in adults. We conducted a prospective, two-group parallel study on 17 consecutive newborns of heavy-smoking mothers and 16 newborns of nonsmoking, unexposed mothers (controls). Neurologic examinations were repeated at days 1, 2, and 5. Finnegan withdrawal score was assessed every 3 h during their first 4 d. Newborns of smoking mothers had significant levels of cotinine in the cord blood (85.8 ± 3.4 ng/mL), whereas none of the controls had detectable levels. Similar findings were observed with urinary cotinine concentrations in the newborns (483.1 ± 2.5 μg/g creatinine versus 43.6 ± 1.5 μg/g creatinine; p = 0.0001). Neurologic scores were significantly lower in newborns of smokers than in control infants at days 1 (22.3 ± 2.3 versus 26.5 ± 1.1; p = 0.0001), 2 (22.4 ± 3.3 versus 26.3 ± 1.6; p = 0.0002), and 5 (24.3 ± 2.1 versus 26.5 ± 1.5; p = 0.002). Neurologic scores improved significantly from day 1 to 5 in newborns of smokers (p = 0.05), reaching values closer to control infants. Withdrawal scores were higher in newborns of smokers than in control infants at days 1 (4.5 ± 1.1 versus 3.2 ± 1.4; p = 0.05), 2 (4.7 ± 1.7 versus 3.1 ± 1.1; p = 0.002), and 4 (4.7 ± 2.1 versus 2.9 ± 1.4; p = 0.007). Significant correlations were observed between markers of nicotine exposure and neurologic-and withdrawal scores. 
We conclude that withdrawal symptoms occur in newborns exposed to heavy maternal smoking during pregnancy.", "title": "" }, { "docid": "ec7f20169de673cc14b31e8516937df2", "text": "Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However, use of a template does not certify that the paper has been accepted for publication in the named journal. INFORMS journal templates are for the exclusive purpose of submitting to an INFORMS journal and should not be used to distribute the papers in print or online or to submit the papers to another publication.", "title": "" }, { "docid": "e97c0bbb74534a16c41b4a717eed87d5", "text": "This paper is discussing about the road accident severity survey using data mining, where different approaches have been considered. We have collected research work carried out by different researchers based on road accidents. Article describing the review work in context of road accident case’s using data mining approach. The article is consisting of collections of methods in different scenario with the aim to resolve the road accident. Every method is somewhere seeming to productive in some ways to decrease the no of causality. It will give a better edge to different country where the no of accidents is leading to fatality of life.", "title": "" }, { "docid": "840a8befafbf6fc43d19b890431f3953", "text": "The prevalence of high hyperlipemia is increasing around the world. Our aims are to analyze the relationship of triglyceride (TG) and cholesterol (TC) with indexes of liver function and kidney function, and to develop a prediction model of TG, TC in overweight people. A total of 302 adult healthy subjects and 273 overweight subjects were enrolled in this study. The levels of fasting indexes of TG (fs-TG), TC (fs-TC), blood glucose, liver function, and kidney function were measured and analyzed by correlation analysis and multiple linear regression (MRL). The back propagation artificial neural network (BP-ANN) was applied to develop prediction models of fs-TG and fs-TC. The results showed there was significant difference in biochemical indexes between healthy people and overweight people. The correlation analysis showed fs-TG was related to weight, height, blood glucose, and indexes of liver and kidney function; while fs-TC was correlated with age, indexes of liver function (P < 0.01). The MRL analysis indicated regression equations of fs-TG and fs-TC both had statistic significant (P < 0.01) when included independent indexes. The BP-ANN model of fs-TG reached training goal at 59 epoch, while fs-TC model achieved high prediction accuracy after training 1000 epoch. In conclusions, there was high relationship of fs-TG and fs-TC with weight, height, age, blood glucose, indexes of liver function and kidney function. Based on related variables, the indexes of fs-TG and fs-TC can be predicted by BP-ANN models in overweight people.", "title": "" }, { "docid": "98ca25396ccd0e7faf0d00b46a2ab470", "text": "Smart glasses, such as Google Glass, provide always-available displays not offered by console and mobile gaming devices, and could potentially offer a pervasive gaming experience. However, research on input for games on smart glasses has been constrained by the available sensors to date. To help inform design directions, this paper explores user-defined game input for smart glasses beyond the capabilities of current sensors, and focuses on the interaction in public settings. 
We conducted a user-defined input study with 24 participants, each performing 17 common game control tasks using 3 classes of interaction and 2 form factors of smart glasses, for a total of 2448 trials. Results show that users significantly preferred non-touch and non-handheld interaction over using handheld input devices, such as in-air gestures. Also, for touch input without handheld devices, users preferred interacting with their palms over wearable devices (51% vs 20%). In addition, users preferred interactions that are less noticeable due to concerns with social acceptance, and preferred in-air gestures in front of the torso rather than in front of the face (63% vs 37%).", "title": "" }, { "docid": "35dda21bd1f2c06a446773b0bfff2dd7", "text": "Mobile devices and their application marketplaces drive the entire economy of the today’s mobile landscape. Android platforms alone have produced staggering revenues, exceeding five billion USD, which has attracted cybercriminals and increased malware in Android markets at an alarming rate. To better understand this slew of threats, we present CopperDroid, an automatic VMI-based dynamic analysis system to reconstruct the behaviors of Android malware. The novelty of CopperDroid lies in its agnostic approach to identify interesting OSand high-level Android-specific behaviors. It reconstructs these behaviors by observing and dissecting system calls and, therefore, is resistant to the multitude of alterations the Android runtime is subjected to over its life-cycle. CopperDroid automatically and accurately reconstructs events of interest that describe, not only well-known process-OS interactions (e.g., file and process creation), but also complex intraand inter-process communications (e.g., SMS reception), whose semantics are typically contextualized through complex Android objects. Because CopperDroid’s reconstruction mechanisms are agnostic to the underlying action invocation methods, it is able to capture actions initiated both from Java and native code execution. CopperDroid’s analysis generates detailed behavioral profiles that abstract a large stream of low-level—often uninteresting—events into concise, high-level semantics, which are well-suited to provide insightful behavioral traits and open the possibility to further research directions. We carried out an extensive evaluation to assess the capabilities and performance of CopperDroid on more than 2,900 Android malware samples. Our experiments show that CopperDroid faithfully reconstructs OSand Android-specific behaviors. Additionally, we demonstrate how CopperDroid can be leveraged to disclose additional behaviors through the use of a simple, yet effective, app stimulation technique. Using this technique, we successfully triggered and disclosed additional behaviors on more than 60% of the analyzed malware samples. This qualitatively demonstrates the versatility of CopperDroid’s ability to improve dynamic-based code coverage.", "title": "" }, { "docid": "557864265ba9fe38bb4d9e4d70e40a06", "text": "Standard word embeddings lack the possibility to distinguish senses of a word by projecting them to exactly one vector. This has a negative effect particularly when computing similarity scores between words using standard vector-based similarity measures such as cosine similarity. We argue that minor senses play an important role in word similarity computations, hence we use an unsupervised sense inventory resource to retrofit monolingual word embeddings, producing sense-aware embeddings. 
Using retrofitted sense-aware embeddings, we show improved word similarity and relatedness results on multiple word embeddings and multiple established word similarity tasks, sometimes up to an impressive margin of +0.15 Spearman correlation score.", "title": "" }, { "docid": "39ebc7cc1a2cb50fb362804b6ae0f768", "text": "We model a dependency graph as a book, a particular kind of topological space, for semantic dependency parsing. The spine of the book is made up of a sequence of words, and each page contains a subset of noncrossing arcs. To build a semantic graph for a given sentence, we design new Maximum Subgraph algorithms to generate noncrossing graphs on each page, and a Lagrangian Relaxation-based algorithm to combine pages into a book. Experiments demonstrate the effectiveness of the book embedding framework across a wide range of conditions. Our parser obtains comparable results with a state-of-the-art transition-based parser.", "title": "" }, { "docid": "824fbd2fe175b4b179226d249792b87a", "text": "While historically software validation focused on the functional requirements, recent approaches also encompass the validation of quality requirements; for example, system reliability, performance or usability. Application development for mobile platforms opens an additional area of qual i ty-power consumption. In PDAs or mobile phones, power consumption varies depending on the hardware resources used, making it possible to specify and validate correct or incorrect executions. Consider an application that downloads a video stream from the network and displays it on the mobile device's display. In the test scenario the viewing of the video is paused at a certain point. If the specification does not allow video prefetching, the user expects the network card activity to stop when video is paused. How can a test engineer check this expectation? Simply running a test suite or even tracing the software execution does not detect the network activity. However, the extraneous network activity can be detected by power measurements and power model application (Figure 1). Tools to find the power inconsistencies and to validate software from the energy point of view are needed.", "title": "" }, { "docid": "52d2004c762d4701ab275d9757c047fc", "text": "Somatic mosaicism — the presence of genetically distinct populations of somatic cells in a given organism — is frequently masked, but it can also result in major phenotypic changes and reveal the expression of otherwise lethal genetic mutations. Mosaicism can be caused by DNA mutations, epigenetic alterations of DNA, chromosomal abnormalities and the spontaneous reversion of inherited mutations. In this review, we discuss the human disorders that result from somatic mosaicism, as well as the molecular genetic mechanisms by which they arise. Specifically, we emphasize the role of selection in the phenotypic manifestations of mosaicism.", "title": "" } ]
scidocsrr
0a0c808d838625a533a45e76aa07b4c8
A Reflection on Call-by-Value
[ { "docid": "a4e92e4dc5d93aec4414bc650436c522", "text": "Where you can find the compiling with continuations easily? Is it in the book store? On-line book store? are you sure? Keep in mind that you will find the book in this site. This book is very referred for you because it gives not only the experience but also lesson. The lessons are very valuable to serve for you, that's not about who are reading this compiling with continuations book. It is about this book that will give wellness for all people from many societies.", "title": "" } ]
[ { "docid": "7c0e4fc967e4a1a3aae97161fae29907", "text": "A crucial step in adding structure to unstructured data is to identify references to entities and disambiguate them. Such disambiguated references can help enhance readability and draw similarities across different pieces of running text in an automated fashion. Previous research has tackled this problem by first forming a catalog of entities from a knowledge base, such as Wikipedia, and then using this catalog to disambiguate references in unseen text. However, most of the previously proposed models either do not use all text in the knowledge base, potentially missing out on discriminative features, or do not exploit word-entity proximity to learn high-quality catalogs. In this work, we propose topic models that keep track of the context of every word in the knowledge base; so that words appearing within the same context as an entity are more likely to be associated with that entity. Thus, our topic models utilize all text present in the knowledge base and help learn high-quality catalogs. Our models also learn groups of co-occurring entities thus enabling collective disambiguation. Unlike most previous topic models, our models are non-parametric and do not require the user to specify the exact number of groups present in the knowledge base. In experiments performed on an extract of Wikipedia containing almost 60,000 references, our models outperform SVM-based baselines by as much as 18% in terms of disambiguation accuracy translating to an increment of almost 11,000 correctly disambiguated references.", "title": "" }, { "docid": "01cf7cb5dd78d5f7754e1c31da9a9eb9", "text": "Today ́s Electronic Industry is changing at a high pace. The root causes are manifold. So world population is growing up to eight billions and gives new challenges in terms of urbanization, mobility and connectivity. Consequently, there will raise up a lot of new business models for the electronic industry. Connectivity will take a large influence on our lives. Concepts like Industry 4.0, internet of things, M2M communication, smart homes or communication in or to cars are growing up. All these applications are based on the same demanding requirement – a high amount of data and increased data transfer rate. These arguments bring up large challenges to the Printed Circuit Board (PCB) design and manufacturing. This paper investigates the impact of different PCB manufacturing technologies and their relation to their high frequency behavior. In the course of the paper a brief overview of PCB manufacturing capabilities is be presented. Moreover, signal losses in terms of frequency, design, manufacturing processes, and substrate materials are investigated. The aim of this paper is, to develop a concept to use materials in combination with optimized PCB manufacturing processes, which allows a significant reduction of losses and increased signal quality. First analysis demonstrate, that for increased signal frequency, demanded by growing data transfer rate, the capabilities to manufacture high frequency PCBs become a key factor in terms of losses. Base materials with particularly high speed properties like very low dielectric constants are used for efficient design of high speed data link lines. Furthermore, copper foils with very low treatment are to be used to minimize loss caused by the skin effect. In addition to the materials composition, the design of high speed circuits is optimized with the help of comprehensive simulations studies. 
The work on this paper focuses on requirements and main questions arising during the PCB manufacturing process in order to improve the system in terms of losses. For that matter, there are several approaches that can be used. For example, the optimization of the structuring process, the use of efficient interconnection capabilities, and dedicated surface finishing can be used to reduce losses and preserve signal integrity. In this study, a comparison of different PCB manufacturing processes by using measurement results of demonstrators that imitate real PCB applications will be discussed. Special attention has be drawn to the manufacturing capabilities which are optimized for high frequency requirements and focused to avoid signal loss. Different line structures like microstrip lines, coplanar waveguides, and surface integrated waveguides are used for this assessment. This research was carried out by Austria Technologie & Systemtechnik AG (AT&S AG), in cooperation with Vienna University of Technology, Institute of Electrodynamics, Microwave and Circuit Engineering. Introduction Several commercially available PCB fabrication processes exist for manufacturing PCBs. In this paper two methods, pattern plating and panel plating, were utilized for manufacturing the test samples. The first step in both described manufacturing processes is drilling, which allows connections in between different copper layers. The second step for pattern plating (see figure 1) is the flash copper plating process, wherein only a thin copper skin (flash copper) is plated into the drilled holes and over the entire surface. On top of the plated copper a layer of photosensitive etch resist is laminated which is imaged subsequently by ultraviolet (UV) light with a negative film. Negative film imaging is exposing the gaps in between the traces to the UV light. In developing process the non-exposed dry film is removed with a sodium solution. After that, the whole surrounding space is plated with copper and is eventually covered by tin. The tin layer protects the actual circuit pattern during etching. The pattern plating process shows typically a smaller line width tolerance, compared to panel plating, because of a lower copper thickness before etching. The overall process tolerance for narrow dimensions in the order of several tenths of μm is approximately ± 10%. As originally published in the IPC APEX EXPO Conference Proceedings.", "title": "" }, { "docid": "0d7b24e5676281f1e6dae9941f019a7e", "text": "Determining patterns in data is an important and often difficult task for scientists and students. Unfortunately, graphing and analysis software typically is largely inaccessible to users with vision impairment. Using sound to represent data (i.e., sonification or auditory graphs) can make data analysis more accessible; however, there are few guidelines for designing such displays for maximum effectiveness. One crucial yet understudied design issue is exactly how changes in data (e.g., temperature) are mapped onto changes in sound (e.g., pitch), and how this may depend on the specific user. In this study, magnitude estimation was used to determine preferred data-to-display mappings, polarities, and psychophysical scaling functions relating data values to underlying acoustic parameters (frequency, tempo, or modulation index) for blind and visually impaired listeners. The resulting polarities and scaling functions are compared to previous results with sighted participants. 
There was general agreement about polarities obtained with the two listener populations, with some notable exceptions. There was also evidence for strong similarities regarding the magnitudes of the slopes of the scaling functions, again with some notable differences. For maximum effectiveness, sonification software designers will need to consider carefully their intended users’ vision abilities. Practical implications and limitations are discussed.", "title": "" }, { "docid": "bb361bc0ce796ab9435c281720ce2ae1", "text": "Developers typically rely on the information submitted by end-users to resolve bugs. We conducted a survey on information needs and commonly faced problems with bug reporting among several hundred developers and users of the APACHE, ECLIPSE and MOZILLA projects. In this paper, we present the results of a card sort on the 175 comments sent back to us by the responders of the survey. The card sort revealed several hurdles involved in reporting and resolving bugs, which we present in a collection of recommendations for the design of new bug tracking systems. Such systems could provide contextual assistance, reminders to add information, and most important, assistance to collect and report crucial information to developers.", "title": "" }, { "docid": "eb3f72e91f13a3c6faee53c6d4cd4174", "text": "Recent studies indicate that nearly 75% of queries issued to Web search engines aim at finding information about entities, which are material objects or concepts that exist in the real world or fiction (e.g. people, organizations, products, etc.). Most common information needs underlying this type of queries include finding a certain entity (e.g. “Einstein relativity theory”), a particular attribute or property of an entity (e.g. “Who founded Intel?”) or a list of entities satisfying a certain criteria (e.g. “Formula 1 drivers that won the Monaco Grand Prix”). These information needs can be efficiently addressed by presenting structured information about a target entity or a list of entities retrieved from a knowledge graph either directly as search results or in addition to the ranked list of documents. This tutorial provides a summary of the recent research in knowledge graph entity representation methods and retrieval models. The first part of this tutorial introduces state-of-the-art methods for entity representation, from multi-fielded documents with flat and hierarchical structure to latent dimensional representations based on tensor factorization, while the second part presents recent developments in entity retrieval models, including Fielded Sequential Dependence Model (FSDM) and its parametric extension (PFSDM), as well as entity set expansion and ranking methods.", "title": "" }, { "docid": "183afd3e316e036317da61976939dfa1", "text": "Generative moment matching network (GMMN) is a deep generative model that differs from Generative Adversarial Network (GAN) by replacing the discriminator in GAN with a two-sample test based on kernel maximum mean discrepancy (MMD). Although some theoretical guarantees of MMD have been studied, the empirical performance of GMMN is still not as competitive as that of GAN on challenging and large benchmark datasets. The computational efficiency of GMMN is also less desirable in comparison with GAN, partially due to its requirement for a rather large batch size during the training. 
In this paper, we propose to improve both the model expressiveness of GMMN and its computational efficiency by introducing adversarial kernel learning techniques, as the replacement of a fixed Gaussian kernel in the original GMMN. The new approach combines the key ideas in both GMMN and GAN, hence we name it MMD GAN. The new distance measure in MMD GAN is a meaningful loss that enjoys the advantage of weak* topology and can be optimized via gradient descent with relatively small batch sizes. In our evaluation on multiple benchmark datasets, including MNIST, CIFAR-10, CelebA and LSUN, the performance of MMD GAN significantly outperforms GMMN, and is competitive with other representative GAN works.", "title": "" }, { "docid": "c28b48557a4eda0d29200170435f2935", "text": "An important role is reserved for nuclear imaging techniques in the imaging of neuroendocrine tumors (NETs). Somatostatin receptor scintigraphy (SRS) with (111)In-DTPA-octreotide is currently the most important tracer in the diagnosis, staging and selection for peptide receptor radionuclide therapy (PRRT). In the past decade, different positron-emitting tomography (PET) tracers have been developed. The largest group is the (68)Gallium-labeled somatostatin analogs ((68)Ga-SSA). Several studies have demonstrated their superiority compared to SRS in sensitivity and specificity. Furthermore, patient comfort and effective dose are favorable for (68)Ga-SSA. Other PET targets like β-[(11)C]-5-hydroxy-L-tryptophan ((11)C-5-HTP) and 6-(18)F-L-3,4-dihydroxyphenylalanine ((18)F-DOPA) were developed recently. For insulinomas, glucagon-like peptide-1 receptor imaging is a promising new technique. The evaluation of response after PRRT and other therapies is a challenge. Currently, the official follow-up is performed with radiological imaging techniques. The role of nuclear medicine may increase with the newest tracers for PET. In this review, the different nuclear imaging techniques and tracers for the imaging of NETs will be discussed.", "title": "" }, { "docid": "6d3410de121ffe037eafd5f30daa7252", "text": "One of the more important issues in the development of larger scale complex systems (product development period of two or more years) is accommodating changes to requirements. Requirements gathered for larger scale systems evolve during lengthy development periods due to changes in software and business environments, new user needs and technological advancements. Agile methods, which focus on accommodating change even late in the development lifecycle, can be adopted for the development of larger scale systems. However, as currently applied, these practices are not always suitable for the development of such systems. We propose a soft-structured framework combining the principles of agile and conventional software development that addresses the issue of rapidly changing requirements for larger scale systems. The framework consists of two parts: (1) a soft-structured requirements gathering approach that reflects the agile philosophy i.e., the Agile Requirements Generation Model and (2) a tailored development process that can be applied to either small or larger scale systems.", "title": "" }, { "docid": "113a39d674390e6209ddeffac8c6bfbe", "text": "Multi-instance learning (MIL) is widely acknowledged as a fundamental method to solve weakly supervised problems.
While MIL is usually effective in standard weakly supervised object recognition tasks, in this paper, we investigate the applicability of MIL on an extreme case of weakly supervised learning on the task of fine-grained visual categorization, in which intra-class variance could be larger than inter-class due to the subtle differences between subordinate categories. For this challenging task, we propose a new method that generalizes the standard multi-instance learning framework, for which a novel multi-task co-localization algorithm is proposed to take advantage of the relationship among fine-grained categories and meanwhile performs as an effective initialization strategy for the non-convex multi-instance objective. The localization results also enable object-level domain-specific fine-tuning of deep neural networks, which significantly boosts the performance. Experimental results on three fine-grained datasets reveal the effectiveness of the proposed method, especially the importance of exploiting inter-class relationships between object categories in weakly supervised fine-grained recognition.", "title": "" }, { "docid": "3d911d6eeefefd16f898200da0e1a3ef", "text": "We introduce Reality-based User Interface System (RUIS), a virtual reality (VR) toolkit aimed for students and hobbyists, which we have used in an annually organized VR course for the past four years. RUIS toolkit provides 3D user interface building blocks for creating immersive VR applications with spatial interaction and stereo 3D graphics, while supporting affordable VR peripherals like Kinect, PlayStation Move, Razer Hydra, and Oculus Rift. We describe a novel spatial interaction scheme that combines freeform, full-body interaction with traditional video game locomotion, which can be easily implemented with RUIS. We also discuss the specific challenges associated with developing VR applications, and how they relate to the design principles behind RUIS. Finally, we validate our toolkit by comparing development difficulties experienced by users of different software toolkits, and by presenting several VR applications created with RUIS, demonstrating a variety of spatial user interfaces that it can produce.", "title": "" }, { "docid": "7e047b7c0a0ded44106ce6b50726d092", "text": "Skeleton-based action recognition task is entangled with complex spatio-temporal variations of skeleton joints, and remains challenging for Recurrent Neural Networks (RNNs). In this work, we propose a temporal-then-spatial recalibration scheme to alleviate such complex variations, resulting in an end-to-end Memory Attention Networks (MANs) which consist of a Temporal Attention Recalibration Module (TARM) and a Spatio-Temporal Convolution Module (STCM). Specifically, the TARM is deployed in a residual learning module that employs a novel attention learning network to recalibrate the temporal attention of frames in a skeleton sequence. The STCM treats the attention calibrated skeleton joint sequences as images and leverages the Convolution Neural Networks (CNNs) to further model the spatial and temporal information of skeleton data. These two modules (TARM and STCM) seamlessly form a single network architecture that can be trained in an end-to-end fashion. 
MANs significantly boost the performance of skeleton-based action recognition and achieve the best results on four challenging benchmark datasets: NTU RGB+D, HDM05, SYSU-3D and UT-Kinect.1", "title": "" }, { "docid": "b3c81ac4411c2461dcec7be210ce809c", "text": "The rapid proliferation of the Internet and the cost-effective growth of its key enabling technologies are revolutionizing information technology and creating unprecedented opportunities for developing largescale distributed applications. At the same time, there is a growing concern over the security of Web-based applications, which are rapidly being deployed over the Internet [4]. For example, e-commerce—the leading Web-based application—is projected to have a market exceeding $1 trillion over the next several years. However, this application has already become a security nightmare for both customers and business enterprises as indicated by the recent episodes involving unauthorized access to credit card information. Other leading Web-based applications with considerable information security and privacy issues include telemedicine-based health-care services and online services or businesses involving both public and private sectors. Many of these applications are supported by workflow management systems (WFMSs) [1]. A large number of public and private enterprises are in the forefront of adopting Internetbased WFMSs and finding ways to improve their services and decision-making processes, hence we are faced with the daunting challenge of ensuring the security and privacy of information in such Web-based applications [4]. Typically, a Web-based application can be represented as a three-tier architecture, depicted in the figure, which includes a Web client, network servers, and a back-end information system supported by a suite of databases. For transaction-oriented applications, such as e-commerce, middleware is usually provided between the network servers and back-end systems to ensure proper interoperability. Considerable security challenges and vulnerabilities exist within each component of this architecture. Existing public-key infrastructures (PKIs) provide encryption mechanisms for ensuring information confidentiality, as well as digital signature techniques for authentication, data integrity and non-repudiation [11]. As no access authorization services are provided in this approach, it has a rather limited scope for Web-based applications. The strong need for information security on the Internet is attributable to several factors, including the massive interconnection of heterogeneous and distributed systems, the availability of high volumes of sensitive information at the end systems maintained by corporations and government agencies, easy distribution of automated malicious software by malfeasors, the ease with which computer crimes can be committed anonymously from across geographic boundaries, and the lack of forensic evidence in computer crimes, which makes the detection and prosecution of criminals extremely difficult. Two classes of services are crucial for a secure Internet infrastructure. These include access control services and communication security services. Access James B.D. Joshi,", "title": "" }, { "docid": "fa8b9a91da04a02c0a878f16f90f4f03", "text": "Customer-driven product design process is critically an important part of concurrent engineering (CE). 
Many new principles and approaches, such as quality function deployment (QFD) and axiomatic design, have been introduced to help designers identify the relationship between customer requirements and design characteristics. However, identification of customer requirements and evaluation of design alternatives are still heavily reliant on the designer’s experience and knowledge. This will affect the efficiency and effectiveness of the customer-driven design process and even make the development of design automation more difficult. This paper presents a framework that integrates the analytical hierarchy process (AHP) and the technique for order preference by similarity to ideal solution (TOPSIS) to assist designers in identifying customer requirements and design characteristics, and help achieve an effective evaluation of the final design solution. The proposed approach starts with applying the AHP method to evaluate the relative overall importance of customer requirements and design characteristics. The TOPSIS method is then used to perform competitive benchmarking. Finally, a search strategy is employed to set target values for design characteristics of the recommended design alternative. The performance of the proposed approach is illustrated and validated using a personal digital assistant (PDA) design example. The results show that the proposed approach is capable of helping designers to systematically consider relevant design information and effectively determine the key design objectives and optimal conceptual alternatives. © 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4464ba333313f77e986d4f9a04d5af61", "text": "Despite the recent success of deep learning for many speech processing tasks, single-microphone, speaker-independent speech separation remains challenging for two main reasons. The first reason is the arbitrary order of the target and masker speakers in the mixture (permutation problem), and the second is the unknown number of speakers in the mixture (output dimension problem). We propose a novel deep learning framework for speech separation that addresses both of these issues. We use a neural network to project the time-frequency representation of the mixture signal into a high-dimensional embedding space. A reference point (attractor) is created in the embedding space to represent each speaker, which is defined as the centroid of the speaker in the embedding space. The time-frequency embeddings of each speaker are then forced to cluster around the corresponding attractor point, which is used to determine the time-frequency assignment of the speaker. We propose three methods for finding the attractors for each source in the embedding space and compare their advantages and limitations. The objective function for the network is standard signal reconstruction error, which enables end-to-end operation during both training and test phases. We evaluated our system using the Wall Street Journal dataset WSJ0 on two and three speaker mixtures and report comparable or better performance than other state-of-the-art deep learning methods for speech separation.", "title": "" }, { "docid": "754172d978b47f42539c1363c6e4f83f", "text": "When the economy declines, racial minorities are hit the hardest. Although existing explanations for this effect focus on institutional causes, recent psychological findings suggest that scarcity may also alter perceptions of race in ways that exacerbate discrimination.
We tested the hypothesis that economic resource scarcity causes decision makers to perceive African Americans as \"Blacker\" and that this visual distortion elicits disparities in the allocation of resources. Studies 1 and 2 demonstrated that scarcity altered perceptions of race, lowering subjects' psychophysical threshold for seeing a mixed-race face as \"Black\" as opposed to \"White.\" In studies 3 and 4, scarcity led subjects to visualize African American faces as darker and more \"stereotypically Black,\" compared with a control condition. When presented to naïve subjects, face representations produced under scarcity elicited smaller allocations than control-condition representations. Together, these findings introduce a novel perceptual account for the proliferation of racial disparities under economic scarcity.", "title": "" }, { "docid": "4ac06b70fc02c83cb676f5c479a0fe93", "text": "We propose a framework that captures the denotational probabilities of words and phrases by embedding them in a vector space, and present a method to induce such an embedding from a dataset of denotational probabilities. We show that our model successfully predicts denotational probabilities for unseen phrases, and that its predictions are useful for textual entailment datasets such as SICK and SNLI.", "title": "" }, { "docid": "5d63c5820cc8035822b86ef5fdaebefd", "text": "As the third most popular social network among millennials, Snapchat is well known for its picture and video messaging system that deletes content after it is viewed. However, the Stories feature of Snapchat offers a different perspective of ephemeral content sharing, with pictures and videos that are available for friends to watch an unlimited number of times for 24 hours. We conduct-ed an in-depth qualitative investigation by interviewing 18 participants and reviewing 14 days of their Stories posts. We identify five themes focused on how participants perceive and use the Stories feature, and apply a Goffmanesque metaphor to our analysis. We relate the Stories medium to other research on self-presentation and identity curation in social media.", "title": "" }, { "docid": "66fc8b47dd186fa17240ee64aadf7ca7", "text": "Posterior reversible encephalopathy syndrome (PRES) is characterized by variable associations of seizure activity, consciousness impairment, headaches, visual abnormalities, nausea/vomiting, and focal neurological signs. The PRES may occur in diverse situations. The findings on neuroimaging in PRES are often symmetric and predominate edema in the white matter of the brain areas perfused by the posterior brain circulation, which is reversible when the underlying cause is treated. We report the case of PRES in normotensive patient with hyponatremia.", "title": "" }, { "docid": "05ffbff1d7a8516ca5d5fdf7a8df791b", "text": "Most existing neural networks for learning graphs address permutation invariance by conceiving of the network as a message passing scheme, where each node sums the feature vectors coming from its neighbors. We argue that this imposes a limitation on their representation power, and instead propose a new general architecture for representing objects consisting of a hierarchy of parts, which we call covariant compositional networks (CCNs). Here, covariance means that the activation of each neuron must transform in a specific way under permutations, similarly to steerability in CNNs. 
We achieve covariance by making each activation transform according to a tensor representation of the permutation group, and derive the corresponding tensor aggregation rules that each neuron must implement. Experiments show that CCNs can outperform competing methods on standard graph learning benchmarks.", "title": "" } ]
scidocsrr
d98c577fad1ae62fd3895ed2f6ac8d1f
Standardization for evaluating software-defined networking controllers
[ { "docid": "3e066a6f96e74963046c9c24239196b4", "text": "This paper presents an independent comprehensive analysis of the efficiency indexes of popular open source SDN/OpenFlow controllers (NOX, POX, Beacon, Floodlight, MuL, Maestro, Ryu). The analysed indexes include performance, scalability, reliability, and security. For testing purposes we developed the new framework called hcprobe. The test bed and the methodology we used are discussed in detail so that everyone could reproduce our experiments. The result of the evaluation show that modern SDN/OpenFlow controllers are not ready to be used in production and have to be improved in order to increase all above mentioned characteristics.", "title": "" } ]
[ { "docid": "3604f1ef7df6e0c224bd19034d7c0929", "text": "BACKGROUND\nMost individuals at risk for developing cardiovascular disease (CVD) can reduce risk factors through diet and exercise before resorting to drug treatment. The effect of a combination of resistance training with vegetable-based (soy) versus animal-based (whey) protein supplementation on CVD risk reduction has received little study. The study's purpose was to examine the effects of 12 weeks of resistance exercise training with soy versus whey protein supplementation on strength gains, body composition and serum lipid changes in overweight, hyperlipidemic men.\n\n\nMETHODS\nTwenty-eight overweight, male subjects (BMI 25-30) with serum cholesterol >200 mg/dl were randomly divided into 3 groups (placebo (n = 9), and soy (n = 9) or whey (n = 10) supplementation) and participated in supervised resistance training for 12 weeks. Supplements were provided in a double blind fashion.\n\n\nRESULTS\nAll 3 groups had significant gains in strength, averaging 47% in all major muscle groups and significant increases in fat free mass (2.6%), with no difference among groups. Percent body fat and waist-to-hip ratio decreased significantly in all 3 groups an average of 8% and 2%, respectively, with no difference among groups. Total serum cholesterol decreased significantly, again with no difference among groups.\n\n\nCONCLUSION\nParticipation in a 12 week resistance exercise training program significantly increased strength and improved both body composition and serum cholesterol in overweight, hypercholesterolemic men with no added benefit from protein supplementation.", "title": "" }, { "docid": "7f24dc012f65770b391d182c525fdaff", "text": "This paper focuses on the task of knowledge-based question answering (KBQA). KBQA aims to match the questions with the structured semantics in knowledge base. In this paper, we propose a two-stage method. Firstly, we propose a topic entity extraction model (TEEM) to extract topic entities in questions, which does not rely on hand-crafted features or linguistic tools. We extract topic entities in questions with the TEEM and then search the knowledge triples which are related to the topic entities from the knowledge base as the candidate knowledge triples. Then, we apply Deep Structured Semantic Models based on convolutional neural network and bidirectional long short-term memory to match questions and predicates in the candidate knowledge triples. To obtain better training dataset, we use an iterative approach to retrieve the knowledge triples from the knowledge base. The evaluation result shows that our system achieves an AverageF1 measure of 79.57% on test dataset.", "title": "" }, { "docid": "028070222acb092767aadfdd6824d0df", "text": "The autism spectrum disorders (ASDs) are a group of conditions characterized by impairments in reciprocal social interaction and communication, and the presence of restricted and repetitive behaviours. Individuals with an ASD vary greatly in cognitive development, which can range from above average to intellectual disability. Although ASDs are known to be highly heritable (∼90%), the underlying genetic determinants are still largely unknown. Here we analysed the genome-wide characteristics of rare (<1% frequency) copy number variation in ASD using dense genotyping arrays. 
When comparing 996 ASD individuals of European ancestry to 1,287 matched controls, cases were found to carry a higher global burden of rare, genic copy number variants (CNVs) (1.19 fold, P = 0.012), especially so for loci previously implicated in either ASD and/or intellectual disability (1.69 fold, P = 3.4 × 10-4). Among the CNVs there were numerous de novo and inherited events, sometimes in combination in a given family, implicating many novel ASD genes such as SHANK2, SYNGAP1, DLGAP2 and the X-linked DDX53–PTCHD1 locus. We also discovered an enrichment of CNVs disrupting functional gene sets involved in cellular proliferation, projection and motility, and GTPase/Ras signalling. Our results reveal many new genetic and functional targets in ASD that may lead to final connected pathways.", "title": "" }, { "docid": "5cc3d79d7bd762e8cfd9df658acae3fc", "text": "With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container.", "title": "" }, { "docid": "21324c71d70ca79d2f2c7117c759c915", "text": "The wide-spread of social media provides unprecedented sources of written language that can be used to model and infer online demographics. In this paper, we introduce a novel visual text analytics system, DemographicVis, to aid interactive analysis of such demographic information based on user-generated content. Our approach connects categorical data (demographic information) with textual data, allowing users to understand the characteristics of different demographic groups in a transparent and exploratory manner. The modeling and visualization are based on ground truth demographic information collected via a survey conducted on Reddit.com. Detailed user information is taken into our modeling process that connects the demographic groups with features that best describe the distinguishing characteristics of each group. Features including topical and linguistic are generated from the user-generated contents. Such features are then analyzed and ranked based on their ability to predict the users' demographic information. To enable interactive demographic analysis, we introduce a web-based visual interface that presents the relationship of the demographic groups, their topic interests, as well as the predictive power of various features. We present multiple case studies to showcase the utility of our visual analytics approach in exploring and understanding the interests of different demographic groups. We also report results from a comparative evaluation, showing that the DemographicVis is quantitatively superior or competitive and subjectively preferred when compared to a commercial text analysis tool.", "title": "" }, { "docid": "d156813b45cb419d86280ee2947b6cde", "text": "Within the realm of service robotics, researchers have placed a great amount of effort into learning motions and manipulations for task execution by robots. 
The task of robot learning is very broad, as it involves many tasks such as object detection, action recognition, motion planning, localization, knowledge representation and retrieval, and the intertwining of computer vision and machine learning techniques. In this paper, we focus on how knowledge can be gathered, represented, and reproduced to solve problems as done by researchers in the past decades. We discuss the problems which have existed in robot learning and the solutions, technologies or developments (if any) which have contributed to solving them. Specifically, we look at three broad categories involved in task representation and retrieval for robotics: 1) activity recognition from demonstrations, 2) scene understanding and interpretation, and 3) task representation in robotics datasets and networks. Within each section, we discuss major breakthroughs and how their methods address present issues in robot learning and manipulation.", "title": "" }, { "docid": "a74880697c58a2c4cb84ef1626344316", "text": "This article provides an overview of contemporary and forward looking inter-cell interference coordination techniques for 4G OFDM systems with a specific emphasis on implementations for LTE. Viable approaches include the use of power control, opportunistic spectrum access, intra and inter-base station interference cancellation, adaptive fractional frequency reuse, spatial antenna techniques such as MIMO and SDMA, and adaptive beamforming, as well as recent innovations in decoding algorithms. The applicability, complexity, and performance gains possible with each of these techniques based on simulations and empirical measurements will be highlighted for specific cellular topologies relevant to LTE macro, pico, and femto deployments for both standalone and overlay networks.", "title": "" }, { "docid": "8165a77b36b7c7dd26e5f8223e2564a7", "text": "A novel design method of a wideband dual-polarized antenna is presented by using shorted dipoles, integrated baluns, and crossed feed lines. Simulation and equivalent circuit analysis of the antenna are given. To validate the design method, an antenna prototype is designed, optimized, fabricated, and measured. Measured results verify that the proposed antenna has an impedance bandwidth of 74.5% (from 1.69 to 3.7 GHz) for VSWR < 1.5 at both ports, and the isolation between the two ports is over 30 dB. Stable gain of 8–8.7 dBi and half-power beamwidth (HPBW) of 65°–70° are obtained for 2G/3G/4G base station frequency bands (1.7–2.7 GHz). Compared to the other reported dual-polarized dipole antennas, the presented antenna achieves wide impedance bandwidth, high port isolation, stable antenna gain, and HPBW with a simple structure and compact size.", "title": "" }, { "docid": "0a625d5f0164f7ed987a96510c1b6092", "text": "We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query. Our method maps textual queries and visual features from various regions into a shared space where they are compared for relevance with an inner product. Our method exhibits significant improvements in answering questions such as \"what color,\" where it is necessary to evaluate a specific location, and \"what room,\" where it selectively identifies informative image regions. 
Our model is tested on the recently released VQA [1] dataset, which features free-form human-annotated questions and answers.", "title": "" }, { "docid": "f6362a62b69999bdc3d9f681b68842fc", "text": "Women with breast cancer, whether screen detected or symptomatic, have both mammography and ultrasound for initial imaging assessment. Unlike X-ray or magnetic resonance, which produce an image of the whole breast, ultrasound provides comparatively limited 2D or 3D views located around the lesions. Combining different modalities is an essential task for accurate diagnosis and simulating ultrasound images based on whole breast data could be a way toward correlating different information about the same lesion. Very few studies have dealt with such a simulation framework since the breast undergoes large scale deformation between the prone position of magnetic resonance imaging and the largely supine or lateral position of ultrasound. We present a framework for the realistic simulation of 3D ultrasound images based on prone magnetic resonance images from which a supine position is generated using a biomechanical model. The simulation parameters are derived from a real clinical infrastructure and from transducers that are used for routine scans, leading to highly realistic ultrasound images of any region of the breast.", "title": "" }, { "docid": "70a07b906b31054646cf43eb543ba50c", "text": "1. Cellular and Molecular Research Center, and Neuroscience Department, Tehran University of Medical Sciences, Tehran, Iran 2. Anatomy Department, Tehran University of Medical Science, Tehran, Iran. 3. Physiology Research Center (PRC), Tehran university of Medical Sciences, Tehran, Iran. 4. Institute for Cognitive Science studies (ICSS), Tehran, Iran. 5. Department of Material Science and Engineering, Sharif University of Technology, Tehran, Iran.", "title": "" }, { "docid": "6fb72f68aa41a71ea51b81806d325561", "text": "An important aspect related to the development of face-aging algorithms is the evaluation of the ability of such algorithms to produce accurate age-progressed faces. In most studies reported in the literature, the performance of face-aging systems is established based either on the judgment of human observers or by using machine-based evaluation methods. In this paper we perform an experimental evaluation that aims to assess the applicability of human-based against typical machine based performance evaluation methods. The results of our experiments indicate that machines can be more accurate in determining the performance of face-aging algorithms. Our work aims towards the development of a complete evaluation framework for age progression methodologies.", "title": "" }, { "docid": "aaf6ed732f2cb5ceff714f1d84dac9ed", "text": "Video caption refers to generating a descriptive sentence for a specific short video clip automatically, which has achieved remarkable success recently. However, most of the existing methods focus more on visual information while ignoring the synchronized audio cues. We propose three multimodal deep fusion strategies to maximize the benefits of visual-audio resonance information. The first one explores the impact on cross-modalities feature fusion from low to high order. The second establishes the visual-audio short-term dependency by sharing weights of corresponding front-end networks. The third extends the temporal dependency to long-term through sharing multimodal memory across visual and audio modalities. 
Extensive experiments have validated the effectiveness of our three cross-modality fusion strategies on two benchmark datasets, including Microsoft Research Video to Text (MSRVTT) and Microsoft Video Description (MSVD). It is worth mentioning that weight sharing can coordinate visual-audio feature fusion effectively and achieve state-of-the-art performance on both BLEU and METEOR metrics. Furthermore, we first propose a dynamic multimodal feature fusion framework to deal with the case of partially missing modalities. Experimental results demonstrate that even when audio is absent, we can still obtain comparable results with the aid of the additional audio modality inference module.", "title": "" }, { "docid": "a62c03417176b5751471bad386bbfa08", "text": "Platforms are defined as multisided marketplaces with business models that enable producers and users to create value together by interacting with each other. In recent years, platforms have benefited from the advances of digitalization. Hence, digital platforms continue to triumph, and continue to be attractive for companies, including startups. In this paper, we first explore the research on platforms compared to digital platforms. We then proceed to analyze digital platforms as business models, in the context of startups looking for business model innovation. Based on interviews conducted at a technology startup event in Finland, we analyzed how 34 startups viewed their business model innovations. Using the 10 sub-constructs from the business model innovation scale by Clauss in 2016, we found that the idea of business model innovation resonated with startups, as all of them were able to identify the source of their business model innovation. Furthermore, the results indicated the complexity of business model innovation, as 79 percent of the respondents explained it with more than one sub-construct. New technology/equipment, new processes, and new customers and markets got the most mentions as sources of business model innovation. Overall, the emphasis at startups is on value creation innovation, with new proposition innovation getting less emphasis, and value capture innovation even less, as the source of business model innovation.", "title": "" }, { "docid": "41b3b48c10753600e36a584003eebdd6", "text": "This paper deals with reliability problems of common types of generators in harsh conditions. It shows possibilities of construction changes that should increase machine reliability. This contribution is dedicated to the study of brushless alternators for the automotive industry. Problems with the usage of common types of alternators are described, along with the main benefits and disadvantages of several types of brushless alternators.", "title": "" }, { "docid": "64cc022ac7052a9c82108c88e06b0bf7", "text": "Influential people have an important role in the process of information diffusion. However, there are several ways to be influential, for example, being the most popular or the first to adopt a new idea. In this paper we present a methodology to find trendsetters in information networks according to a specific topic of interest. Trendsetters are people that adopt and spread new ideas, influencing other people before these ideas become popular. At the same time, not all early adopters are trendsetters, because only a few of them have the ability to propagate their ideas to their social contacts through word-of-mouth.
Differently from other influence measures, a trendsetter is not necessarily popular or famous, but the one whose ideas spread over the graph successfully. Other metrics such as node in-degree or even standard Pagerank focus only in the static topology of the network. We propose a ranking strategy that focuses on the ability of some users to push new ideas that will be successful in the future. To that end, we combine temporal attributes of nodes and edges of the network with a Pagerank based algorithm to find the trendsetters for a given topic. To test our algorithm we conduct innovative experiments over a large Twitter dataset. We show that nodes with high in-degree tend to arrive late for new trends, while users in the top of our ranking tend to be early adopters that also influence their social contacts to adopt the new trend.", "title": "" }, { "docid": "404a662b55baea9402d449fae6192424", "text": "Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.", "title": "" }, { "docid": "1bdb24fb4c85b3aaf8a8e5d71328a920", "text": "BACKGROUND\nHigh-grade intraepithelial neoplasia is known to progress to invasive squamous-cell carcinoma of the anus. There are limited reports on the rate of progression from high-grade intraepithelial neoplasia to anal cancer in HIV-positive men who have sex with men.\n\n\nOBJECTIVES\nThe purpose of this study was to describe in HIV-positive men who have sex with men with perianal high-grade intraepithelial neoplasia the rate of progression to anal cancer and the factors associated with that progression.\n\n\nDESIGN\nThis was a prospective cohort study.\n\n\nSETTINGS\nThe study was conducted at an outpatient clinic at a tertiary care center in Toronto.\n\n\nPATIENTS\nThirty-eight patients with perianal high-grade anal intraepithelial neoplasia were identified among 550 HIV-positive men who have sex with men.\n\n\nINTERVENTION\nAll of the patients had high-resolution anoscopy for symptoms, screening, or surveillance with follow-up monitoring/treatment.\n\n\nMAIN OUTCOME MEASURES\nWe measured the incidence of anal cancer per 100 person-years of follow-up.\n\n\nRESULTS\nSeven (of 38) patients (18.4%) with perianal high-grade intraepithelial neoplasia developed anal cancer. The rate of progression was 6.9 (95% CI, 2.8-14.2) cases of anal cancer per 100 person-years of follow-up. A diagnosis of AIDS, previously treated anal cancer, and loss of integrity of the lesion were associated with progression. 
Anal bleeding was more than twice as common in patients who progressed to anal cancer.\n\n\nLIMITATIONS\nThere was the potential for selection bias, and patients were offered treatment, which may have affected incidence estimates.\n\n\nCONCLUSIONS\nHIV-positive men who have sex with men should be monitored for perianal high-grade intraepithelial neoplasia. Those with high-risk features for the development of anal cancer may need more aggressive therapy.", "title": "" }, { "docid": "62688aa48180943a6fcf73fef154fe75", "text": "Oxidative stress is a phenomenon associated with the pathology of several diseases including atherosclerosis, neurodegenerative diseases such as Alzheimer’s and Parkinson’s diseases, cancer, diabetes mellitus, inflammatory diseases, as well as psychiatric disorders and the aging process. Oxidative stress is defined as an imbalance between the production of free radicals and reactive metabolites, so-called oxidants, and their elimination by protective mechanisms known as antioxidative systems. Free radicals and their metabolites prevail over antioxidants. This imbalance leads to damage of important biomolecules and organs, with a plausible impact on the whole organism. Oxidative and antioxidative processes are associated with electron transfer influencing the redox state of cells and organisms; therefore, oxidative stress is also known as redox stress. At present, the opinion that oxidative stress is not always harmful has been accepted. Depending on its intensity, it can play a role in the regulation of other important processes through modulation of signal pathways, influencing the synthesis of antioxidant enzymes, repair processes, inflammation, apoptosis and cell proliferation, and thus the process of malignancy. Therefore, improper administration of antioxidants can potentially negatively impact biological systems.", "title": "" }, { "docid": "91c792fac981d027ac1f2a2773674b10", "text": "Cancer is a molecular disease associated with alterations in the genome, which, thanks to the highly improved sensitivity of mutation detection techniques, can be identified in cell-free DNA (cfDNA) circulating in blood, a method also called liquid biopsy. This is a non-invasive alternative to surgical biopsy and has the potential of revealing the molecular signature of tumors to aid in the individualization of treatments. In this review, we focus on cfDNA analysis, its advantages, and clinical applications employing genomic tools (NGS and dPCR), particularly in the field of oncology, and highlight its valuable contributions to early detection, prognosis, and prediction of treatment response.", "title": "" } ]
scidocsrr
5369e0ec52989c5b78e198d934e603b1
DAAL: Deep activation-based attribute learning for action recognition in depth videos
[ { "docid": "695af0109c538ca04acff8600d6604d4", "text": "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.", "title": "" }, { "docid": "d529b723bbba3182d02a0104d4418c6d", "text": "Learning the spatial-temporal representation of motion information is crucial to human action recognition. Nevertheless, most of the existing features or descriptors cannot capture motion information effectively, especially for long-term motion. To address this problem, this paper proposes a long-term motion descriptor called sequential deep trajectory descriptor (sDTD). Specifically, we project dense trajectories into two-dimensional planes, and subsequently a CNN-RNN network is employed to learn an effective representation for long-term motion. Unlike the popular two-stream ConvNets, the sDTD stream is introduced into a three-stream framework so as to identify actions from a video sequence. Consequently, this three-stream framework can simultaneously capture static spatial features, short-term motion, and long-term motion in the video. Extensive experiments were conducted on three challenging datasets: KTH, HMDB51, and UCF101. Experimental results show that our method achieves state-of-the-art performance on the KTH and UCF101 datasets, and is comparable to the state-of-the-art methods on the HMDB51 dataset.", "title": "" }, { "docid": "8a19befe72e06f2adaf58a575ac16cdb", "text": "Single modality action recognition on RGB or depth sequences has been extensively explored recently. It is generally accepted that each of these two modalities has different strengths and limitations for the task of action recognition. Therefore, analysis of the RGB+D videos can help us to better study the complementary properties of these two types of modalities and achieve higher levels of performance. In this paper, we propose a new deep autoencoder based shared-specific feature factorization network to separate input multimodal signals into a hierarchy of components. Further, based on the structure of the features, a structured sparsity learning machine is proposed which utilizes mixed norms to apply regularization within components and group selection between them for better classification performance. 
Our experimental results show the effectiveness of our cross-modality feature analysis framework by achieving state-of-the-art accuracy for action classification on five challenging benchmark datasets.", "title": "" }, { "docid": "d1c84b1131f8cb2abbbb0383c83bc0d2", "text": "Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms.", "title": "" } ]
[ { "docid": "f7a69acbc2766e990cbd4f3c9b4124d1", "text": "This paper aims at assisting empirical researchers benefit from recent advances in causal inference. The paper stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underly all causal inferences, the languages used in formulating those assumptions, and the conditional nature of causal claims inferred from nonexperimental studies. These emphases are illustrated through a brief survey of recent results, including the control of confounding, the assessment of causal effects, the interpretation of counterfactuals, and a symbiosis between counterfactual and graphical methods of analysis.", "title": "" }, { "docid": "a14656cc178eeffb5327c74649fdb456", "text": "White light emitting diode (LED) with high brightness has attracted a lot of attention from both industry and academia for its high efficiency, ease to drive, environmental friendliness, and long lifespan. They become possible applications to replace the incandescent bulbs and fluorescent lamps in residential, industrial and commercial lighting. The realization of this new lighting source requires both tight LED voltage regulation and high power factor as well. This paper proposed a single-stage flyback converter for the LED lighting applications and input power factor correction. A type-II compensator has been inserted in the voltage loop providing sufficient bandwidth and stable phase margin. The flyback converter is controlled with voltage mode pulse width modulation (PWM) and run in discontinuous conduction mode (DCM) so that the inductor current follows the rectified input voltage, resulting in high power factor. A prototype topology of closed-loop, single-stage flyback converter for LED driver circuit designed for an 18W LED lighting source is constructed and tested to verify the theoretical predictions. The measured performance of the LED lighting fixture can achieve a high power factor greater than 0.998 and a low total harmonic distortion less than 5.0%. Experimental results show the functionality of the overall system and prove it to be an effective solution for the new lighting applications.", "title": "" }, { "docid": "bc92aa05e989ead172274b4558aa4443", "text": "A recent video coding standard, called High Efficiency Video Coding (HEVC), adopts two in-loop filters for coding efficiency improvement where the in-loop filtering is done by a de-blocking filter (DF) followed by sample adaptive offset (SAO) filtering. The DF helps improve both coding efficiency and subjective quality without signaling any bit to decoder sides while SAO filtering corrects the quantization errors by sending offset values to decoders. In this paper, we first present a new in-loop filtering technique using convolutional neural networks (CNN), called IFCNN, for coding efficiency and subjective visual quality improvement. The IFCNN does not require signaling bits by using the same trained weights in both encoders and decoder. The proposed IFCNN is trained in two different QP ranges: QR1 from QP = 20 to QP = 29; and QR2 from QP = 30 to QP = 39. In testing, the IFCNN trained in QR1 is applied for the encoding/decoding with QP values less than 30 while the IFCNN trained in QR2 is applied for the case of QP values greater than 29. 
The experiment results show that the proposed IFCNN outperforms the HEVC reference mode (HM) with average 1.9%-2.8% gain in BD-rate for Low Delay configuration, and average 1.6%-2.6% gain in BD-rate for Random Access configuration with IDR period 16.", "title": "" }, { "docid": "9b430645f7b0da19b2c55d43985259d8", "text": "Research on human spatial memory and navigational ability has recently shown the strong influence of reference systems in spatial memory on the ways spatial information is accessed in navigation and other spatially oriented tasks. One of the main findings can be characterized as a large cognitive cost, both in terms of speed and accuracy that occurs whenever the reference system used to encode spatial information in memory is not aligned with the reference system required by a particular task. In this paper, the role of aligned and misaligned reference systems is discussed in the context of the built environment and modern architecture. The role of architectural design on the perception and mental representation of space by humans is investigated. The navigability and usability of built space is systematically analysed in the light of cognitive theories of spatial and navigational abilities of humans. It is concluded that a building’s navigability and related wayfinding issues can benefit from architectural design that takes into account basic results of spatial cognition research. 1 Wayfinding and Architecture Life takes place in space and humans, like other organisms, have developed adaptive strategies to find their way around their environment. Tasks such as identifying a place or direction, retracing one’s path, or navigating a large-scale space, are essential elements to mobile organisms. Most of these spatial abilities have evolved in natural environments over a very long time, using properties present in nature as cues for spatial orientation and wayfinding. With the rise of complex social structure and culture, humans began to modify their natural environment to better fit their needs. The emergence of primitive dwellings mainly provided shelter, but at the same time allowed builders to create environments whose spatial structure “regulated” the chaotic natural environment. They did this by using basic measurements and geometric relations, such as straight lines, right angles, etc., as the basic elements of design (Le Corbusier, 1931, p. 69ff.) In modern society, most of our lives take place in similar regulated, human-made spatial environments, with paths, tracks, streets, and hallways as the main arteries of human locomotion. Architecture and landscape architecture embody the human effort to structure space in meaningful and useful ways. Architectural design of space has multiple functions. Architecture is designed to satisfy the different representational, functional, aesthetic, and emotional needs of organizations and the people who live or work in these structures. In this chapter, emphasis lies on a specific functional aspect of architectural design: human wayfinding. Many approaches to improving architecture focus on functional issues, like improved ecological design, the creation of improved workplaces, better climate control, lighting conditions, or social meeting areas. Similarly, when focusing on the mobility of humans, the ease of wayfinding within a building can be seen as an essential function of a building’s design (Arthur & Passini, 1992; Passini, 1984). 
When focusing on wayfinding issues in buildings, cities, and landscapes, the designed spatial environment can be seen as an important tool in achieving a particular goal, e.g., reaching a destination or finding an exit in case of emergency. This view, if taken to a literal extreme, is summarized by Le Corbusier’s (1931) notion of the building as a “machine,” mirroring in architecture the engineering ideals of efficiency and functionality found in airplanes and cars. In the narrow sense of wayfinding, a building thus can be considered of good design if it allows easy and error-free navigation. This view is also adopted by Passini (1984), who states that “although the architecture and the spatial configuration of a building generate the wayfinding problems people have to solve, they are also a wayfinding support system in that they contain the information necessary to solve the problem” (p. 110). Like other problems of engineering, the wayfinding problem in architecture should have one or more solutions that can be evaluated. This view of architecture can be contrasted with the alternative view of architecture as “built philosophy”. According to this latter view, architecture, like art, expresses ideas and cultural progress by shaping the spatial structure of the world – a view which gives consideration to the users as part of the philosophical approach but not necessarily from a usability perspective. Viewing wayfinding within the built environment as a “man-machine-interaction” problem makes clear that good architectural design with respect to navigability needs to take two factors into account. First, the human user comes equipped with particular sensory, perceptual, motoric, and cognitive abilities. Knowledge of these abilities and the limitations of an average user or special user populations thus is a prerequisite for good design. Second, structural, functional, financial, and other design considerations restrict the degrees of freedom architects have in designing usable spaces. In the following sections, we first focus on basic research on human spatial cognition. Even though not all of it is directly applicable to architectural design and wayfinding, it lays the foundation for more specific analyses in part 3 and 4. In part 3, the emphasis is on a specific research question that recently has attracted some attention: the role of environmental structure (e.g., building and street layout) for the selection of a spatial reference frame. In part 4, implications for architectural design are discussed by means of two real-world examples. 2 The human user in wayfinding 2.1 Navigational strategies Finding one’s way in the environment, reaching a destination, or remembering the location of relevant objects are some of the elementary tasks of human activity. Fortunately, human navigators are well equipped with an array of flexible navigational strategies, which usually enable them to master their spatial environment (Allen, 1999). In addition, human navigation can rely on tools that extend human sensory and mnemonic abilities. Most spatial or navigational strategies are so common that they do not occur to us when we perform them. Walking down a hallway we hardly realize that the optical and acoustical flows give us rich information about where we are headed and whether we will collide with other objects (Gibson, 1979). Our perception of other objects already includes physical and social models on how they will move and where they will be once we reach the point where paths might cross. 
Following a path can consist of following a particular visual texture (e.g., asphalt) or feeling a handrail in the dark by touch. At places where multiple continuing paths are possible, we might have learned to associate the scene with a particular action (e.g., turn left; Schölkopf & Mallot, 1995), or we might try to approximate a heading direction by choosing the path that most closely resembles this direction. When in doubt about our path we might ask another person or consult a map. As is evident from this brief (and not exhaustive) description, navigational strategies and activities are rich in diversity and adaptability (for an overview see Golledge, 1999; Werner, Krieg-Brückner, & Herrmann, 2000), some of which are aided by architectural design and signage (see Arthur & Passini, 1992; Passini, 1984). Despite the large number of different navigational strategies, people still experience problems finding their way or even feel lost momentarily. This feeling of being lost might reflect the lack of a key component of human wayfinding: knowledge about where one is located in an environment – with respect to one’s goal, one’s starting location, or with respect to the global environment one is in. As Lynch put it, “the terror of being lost comes from the necessity that a mobile organism be oriented in its surroundings” (1960, p. 125.) Some wayfinding strategies, like vector navigation, rely heavily on this information. Other strategies, e.g. piloting or path-following, which are based on purely local information can benefit from even vague locational knowledge as a redundant source of information to validate or question navigational decisions (see Werner et al., 2000, for examples.) Proficient signage in buildings, on the other hand, relies on a different strategy. It relieves a user from keeping track of his or her position in space by indicating the correct navigational choice whenever the choice becomes relevant. Keeping track of one’s position during navigation can be done quite easily if access to global landmarks, reference directions, or coordinates is possible. Unfortunately, the built environment often does not allow for simple navigational strategies based on these types of information. Instead, spatial information has to be integrated across multiple places, paths, turns, and extended periods of time (see Poucet, 1993, for an interesting model of how this can be achieved). In the next section we will describe an essential ingredient of this integration – the mental representation of spatial information in memory. 2.2 Alignment effects in spatial memory When observing tourists in an unfamiliar environment, one often notices people frantically turning maps to align the noticeable landmarks depicted in the map with the visible landmarks as seen from the viewpoint of the tourist. This type of behavior indicates a well-established cognitive principle (Levine, Jankovic, & Palij, 1982). Observers more easily comprehend and use information depicted in “You-are-here” (YAH) maps if the up-down direction of the map coincides with the front-back direction of the observer. In this situation, the natural preference of directional mapping of top to front and bottom to back is used, and left and right in the map stay left and right in the depicted world. 
While th", "title": "" }, { "docid": "6c2a033b374b4318cd94f0a617ec705a", "text": "In this paper, we propose to use Deep Neural Net (DNN), which has been recently shown to reduce speech recognition errors significantly, in Computer-Aided Language Learning (CALL) to evaluate English learners’ pronunciations. Multi-layer, stacked Restricted Boltzman Machines (RBMs), are first trained as nonlinear basis functions to represent speech signals succinctly, and the output layer is discriminatively trained to optimize the posterior probabilities of correct, sub-phonemic “senone” states. Three Goodness of Pronunciation (GOP) scores, including: the likelihood-based posterior probability, averaged framelevel posteriors of the DNN output layer “senone” nodes, and log likelihood ratio of correct and competing models, are tested with recordings of both native and non-native speakers, along with manual grading of pronunciation quality. The experimental results show that the GOP estimated by averaged frame-level posteriors of “senones” correlate with human scores the best. Comparing with GOPs estimated with non-DNN, i.e. GMMHMM, based models, the new approach can improve the correlations relatively by 22.0% or 15.6%, at word or sentence levels, respectively. In addition, the frame-level posteriors, which doesn’t need a decoding lattice and its corresponding forwardbackward computations, is suitable for supporting fast, on-line, multi-channel applications.", "title": "" }, { "docid": "90ba220babb8030d1c400352dfde6473", "text": "Localization and navigation are fundamental issues to autonomous mobile robotics. In the case of the environmental map has been built, the traditional two-dimensional (2D) lidar localization and navigation system can't be matched to the initial position of the robot in dynamic environment and will be unreliable when kidnapping occurs. Moreover, it relies on high-cost lidar for high accuracy and long range. In view of this, the paper presents a low cost navigation system based on a low cost lidar and a cheap webcam. In this approach, 2D-codes are attached to the ceiling, to provide reference points to aid the indoor robot localization. The mobile robot is equipped with webcam pointing to the ceiling to identify 2D-codes. On the other hand, a low-cost 2D laser scanner is applied to build a map in unknown environment and detect obstacles. Adaptive Monte Carlo Localization (AMCL) is implements for lidar positioning, A* and Dynamic Window Approach (DWA) are applied in path planning based on a 2D grid map. The error analysis and experiments has validated the proposed method.", "title": "" }, { "docid": "dbc57902c0655f1bdb3f7dbdcdb6fd5c", "text": "In this paper, a progressive learning technique for multi-class classification is proposed. This newly developed learning technique is independent of the number of class constraints and it can learn new classes while still retaining the knowledge of previous classes. Whenever a new class (non-native to the knowledge learnt thus far) is encountered, the neural network structure gets remodeled automatically by facilitating new neurons and interconnections, and the parameters are calculated in such a way that it retains the knowledge learnt thus far. This technique is suitable for realworld applications where the number of classes is often unknown and online learning from real-time data is required. The consistency and the complexity of the progressive learning technique are analyzed. 
Several standard datasets are used to evaluate the performance of the developed technique. A comparative study shows that the developed technique is superior. Key Words—Classification, machine learning, multi-class, sequential learning, progressive learning.", "title": "" }, { "docid": "cebc36cd572740069ab22e8181c405c4", "text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context of evolutionary RL.", "title": "" }, { "docid": "d9f0f36e75c08d2c3097e85d8c2dec36", "text": "Social software solutions in enterprises such as IBM Connections are said to have the potential to support communication and collaboration among employees. However, companies are faced with managing the adoption of such collaborative tools and therefore need to raise the employees’ acceptance and motivation. To solve these problems, developers started to implement Gamification elements in social software tools, which aim to increase users’ motivation. In this research-in-progress paper, we give first insights and critically examine the current market of leading social software solutions to find out which Gamification approaches are implemented in such collaborative tools. Our findings show that most of the major social collaboration solutions do not offer Gamification features by default, but leave the integration to a variety of third-party plug-in vendors. Furthermore, we identify a trend in which Gamification solutions majorly focus on rewarding quantitative improvement of work activities, neglecting qualitative performance. Subsequently, current solutions do not match recent findings in research and ignore risks that can lower the employees’ motivation and work performance in the long run.", "title": "" }, { "docid": "d1475e197b300489acedf8c0cbe8f182", "text": "The publication of IEC 61850-90-1 \"Use of IEC 61850 for the communication between substations\" and the draft of IEC 61850-90-5 \"Use of IEC 61850 to transmit synchrophasor information\" opened the possibility to study IEC 61850 GOOSE messages over WAN not only in layer 2 (link layer) but also in layer 3 (network layer) of the OSI model. In this paper we examine different possibilities to make teleprotection feasible in the network layer over WAN, sharing the communication channel with automation, management and maintenance convergence services among electrical energy substations.", "title": "" }, { "docid": "d27ed8fd2acd0dad6436b7e98853239d", "text": "What are the psychological mechanisms that trigger habits in daily life?
Two studies reveal that strong habits are influenced by context cues associated with past performance (e.g., locations) but are relatively unaffected by current goals. Specifically, performance contexts—but not goals—automatically triggered strongly habitual behaviors in memory (Experiment 1) and triggered overt habit performance (Experiment 2). Nonetheless, habits sometimes appear to be linked to goals because people self-perceive their habits to be guided by goals. Furthermore, habits of moderate strength are automatically influenced by goals, yielding a curvilinear, U-shaped relation between habit strength and actual goal influence. Thus, research that taps self-perceptions or moderately strong habits may find habits to be linked to goals. Introduction Having cast off the strictures of behaviorism, psychologists are showing renewed interest in the psychological processes that guide This interest is fueled partly by the recognition that automaticity is not a unitary construct. Hence, different kinds of automatic responses may be triggered and controlled in different ways (Bargh, 1994; Moors & De Houwer, 2006). However, the field has not yet converged on a common understanding of the psychological mechanisms that underlie habits. Habits can be defined as psychological dispositions to repeat past behavior. They are acquired gradually as people repeatedly respond in a recurring context (e.g., performance settings, action sequences, Wood & Neal, 2007, 2009). Most researchers agree that habits often originate in goal pursuit, given that people are likely to repeat actions that are rewarding or yield desired outcomes. In addition, habit strength is a continuum, with habits of weak and moderate strength performed with lower frequency and/or in more variable contexts than strong habits This consensus aside, it remains unclear how goals and context cues influence habit automaticity. Goals are motivational states that (a) define a valued outcome that (b) energizes and directs action (e.g., the goal of getting an A in class energizes late night studying; Förster, Liberman, & Friedman, 2007). In contrast, context cues for habits reflect features of the performance environment in which the response typically occurs (e.g., the college library as a setting for late night studying). Some prior research indicates that habits are activated automatically by goals (e.g., Aarts & Dijksterhuis, 2000), whereas others indicate that habits are activated directly by context cues, with minimal influence of goals In the present experiments, we first test the cognitive associations …", "title": "" }, { "docid": "902655db97a2f00a346ffda3694d01f3", "text": "In this paper, we propose a new pipeline of word embedding for unsegmented languages, called segmentation-free word embedding, which does not require word segmentation as a preprocessing step. Unlike space-delimited languages, unsegmented languages, such as Chinese and Japanese, require word segmentation as a preprocessing step. However, word segmentation, that often requires manually annotated resources, is difficult and expensive, and unavoidable errors in word segmentation affect downstream tasks. To avoid these problems in learning word vectors of unsegmented languages, we consider word co-occurrence statistics over all possible candidates of segmentations based on frequent character n-grams instead of segmented sentences provided by conventional word segmenters. 
Our experiments of noun category prediction tasks on raw Twitter, Weibo, and Wikipedia corpora show that the proposed method outperforms the conventional approaches that require word segmenters.", "title": "" }, { "docid": "39598533576bdd3fa94df5a6967b9b2d", "text": "Genetic Algorithm (GA) and other Evolutionary Algorithms (EAs) have been successfully applied to solve constrained minimum spanning tree (MST) problems of the communication network design and also have been used extensively in a wide variety of communication network design problems. Choosing an appropriate representation of candidate solutions to the problem is the essential issue for applying GAs to solve real world network design problems, since the encoding and the interaction of the encoding with the crossover and mutation operators have strongly influence on the success of GAs. In this paper, we investigate a new encoding crossover and mutation operators on the performance of GAs to design of minimum spanning tree problem. Based on the performance analysis of these encoding methods in GAs, we improve predecessor-based encoding, in which initialization depends on an underlying random spanning-tree algorithm. The proposed crossover and mutation operators offer locality, heritability, and computational efficiency. We compare with the approach to others that encode candidate spanning trees via the Pr?fer number-based encoding, edge set-based encoding, and demonstrate better results on larger instances for the communication spanning tree design problems. key words: minimum spanning tree (MST), communication network design, genetic algorithm (GA), node-based encoding", "title": "" }, { "docid": "bb253cee8f3b8de7c90e09ef878434f3", "text": "Under most widely-used security mechanisms the programs users run possess more authority than is strictly necessary, with each process typically capable of utilising all of the user’s privileges. Consequently such security mechanisms often fail to protect against contemporary threats, such as previously unknown (‘zero-day’) malware and software vulnerabilities, as processes can misuse a user’s privileges to behave maliciously. Application restrictions and sandboxes can mitigate threats that traditional approaches to access control fail to prevent by limiting the authority granted to each process. This developing field has become an active area of research, and a variety of solutions have been proposed. However, despite the seriousness of the problem and the security advantages these schemes provide, practical obstacles have restricted their adoption. This paper describes the motivation for application restrictions and sandboxes, presenting an indepth review of the literature covering existing systems. This is the most comprehensive review of the field to date. The paper outlines the broad categories of existing application-oriented access control schemes, such as isolation and rule-based schemes, and discusses their limitations. Adoption of these schemes has arguably been impeded by workflow, policy complexity, and usability issues. The paper concludes with a discussion on areas for future work, and points a way forward within this developing field of research with recommendations for usability and abstraction to be considered to a further extent when designing application-oriented access", "title": "" }, { "docid": "aa362363d6e4b48f7d0b50b02f35a8a2", "text": "In this paper, we mainly adopt the voting combination method to implement the incremental learning for SVM. 
The incremental learning algorithm proposed by this paper has contained two parts in order to tackle different types of incremental learning cases, the first part is to deal with the on-line incremental learning and the second part is to deal with the batch incremental learning. In the final, we make the experiment to verify the validity and efficiency of such algorithm.", "title": "" }, { "docid": "883d79eac056314ae45feca23d79c3e3", "text": "Our life is characterized by the presence of a multitude of interactive devices and smart objects exploited for disparate goals in different contexts of use. Thus, it is impossible for application developers to predict at design time the devices and objects users will exploit, how they will be arranged, and in which situations and for which objectives they will be used. For such reasons, it is important to make end users able to easily and autonomously personalize the behaviour of their Internet of Things applications, so that they can better comply with their specific expectations. In this paper, we present a method and a set of tools that allow end users without programming experience to customize the context-dependent behaviour of their Web applications through the specification of trigger-action rules. The environment is able to support end-user specification of more flexible behaviour than what can be done with existing commercial tools, and it also includes an underlying infrastructure able to detect the possible contextual changes in order to achieve the desired behaviour. The resulting set of tools is able to support the dynamic creation and execution of personalized application versions more suitable for users’ needs in specific contexts of use. Thus, it represents a contribution to obtaining low threshold/high ceiling environments. We also report on an example application in the home automation domain, and a user study that has provided useful positive feedback.", "title": "" }, { "docid": "172835b4eaaf987e93d352177fd583b1", "text": "A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as “or”, “sum” or “max”, on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.", "title": "" }, { "docid": "e1dd2a719d3389a11323c5245cd2b938", "text": "Secure identity tokens such as Electronic Identity (eID) cards are emerging everywhere. At the same time user-centric identity management gains acceptance. Anonymous credential schemes are the optimal realization of user-centricity. 
However, on inexpensive hardware platforms, typically used for eID cards, these schemes could not be made to meet the necessary requirements such as future-proof key lengths and transaction times on the order of 10 seconds. The reasons for this is the need for the hardware platform to be standardized and certified. Therefore an implementation is only possible as a Java Card applet. This results in severe restrictions: little memory (transient and persistent), an 8-bit CPU, and access to hardware acceleration for cryptographic operations only by defined interfaces such as RSA encryption operations.\n Still, we present the first practical implementation of an anonymous credential system on a Java Card 2.2.1. We achieve transaction times that are orders of magnitudes faster than those of any prior attempt, while raising the bar in terms of key length and trust model. Our system is the first one to act completely autonomously on card and to maintain its properties in the face of an untrusted terminal. In addition, we provide a formal system specification and share our solution strategies and experiences gained and with the Java Card.", "title": "" }, { "docid": "bbc802e8653c6ae6cb643acc649de471", "text": "To overcome the power delivery limitations of batteries and energy storage limitations of ultracapacitors, hybrid energy storage systems, which combine the two energy sources, have been proposed. A comprehensive review of the state of the art is presented. In addition, a method of optimizing the operation of a battery/ultracapacitor hybrid energy storage system (HESS) is presented. The goal is to set the state of charge of the ultracapacitor and the battery in a way which ensures that the available power and energy is sufficient to supply the drivetrain. By utilizing an algorithm where the states of charge of both systems are tightly controlled, we allow for the overall system size to reduce since more power is available from a smaller energy storage system", "title": "" }, { "docid": "15d3605a6c7ceadd0216a9f67915dfdf", "text": "Rendu-Osler-Weber disease, or hereditary hemorrhagic telangiectasia (HHT), is an autosomal dominant disease characterized by systemic vascular dysplasia. The prevalence varies and ranges, according to region, from 1/3500 to 1/5000. Data concerning Italy are not available. The diagnosis is based on the following criteria: family history, epistaxis, telangiectases and visceral arteriovenous malformations. The diagnosis is to be considered definite if three criteria are present and suspected if two criteria are present. From September 2000 to March 2002, 100 patients (63 males, 37 females, mean age 45.5 +/- 17.3 years) potentially affected by HHT were evaluated in the HHT Center of the \"Augusto Murri\" Internal Medicine Section at the University of Bari (on a day-hospital or hospitalization basis). The diagnosis of HHT was confirmed in 56 patients and suspected in 10. Magnetic resonance imaging revealed cerebral arteriovenous malformations in 8.5% of patients. In 14.6% of patients contrast echocardiography revealed pulmonary arteriovenous malformations subsequently confirmed at multislice computed tomography in all cases but one. In 48.2% of subjects hepatic vascular malformations were revealed by echo color Doppler ultrasonography, whereas abdominal multislice computed tomography was positive in 63.8% of patients. In 64% of the 25 patients, who underwent endoscopy, gastric telangiectases were found. 
In 3 out of 6 patients presenting with pulmonary arteriovenous malformations, embolotherapy was performed with success. In our patients, the use of tranexamic acid caused a reduction in the frequency of epistaxis. The future objectives of the HHT Center of Bari are to increase knowledge of the disease, to cooperate with other centers with the aim of increasing the number of patients studied and to avoid the limits of therapeutic and diagnostic protocols of a rare disease such as HHT.", "title": "" } ]
scidocsrr
9b1bf4cb1de34c14992f02862227a5dd
An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data
[ { "docid": "5a7d3bfaae94ee144153369a5d23a0a4", "text": "This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) for a previously published four class card pip recognition task and an accuracy of 84.9% ± 1.9% for a new more difficult 36 class character recognition task.", "title": "" } ]
[ { "docid": "f45339f8a474c2601953e5e2196e51f6", "text": "Remote health monitoring system has been an interesting topic recently among medical practitioners, engineers as well as IT professionals. However, the application of remote health monitoring system where doctor's can monitor patients' vital signs via web is practically new in Malaysia and other countries. Remote health monitoring system is beneficial to the patients and society where the implementation of such system will save hospital bill, waiting time and reduce traffics in the hospital. The objective of this project is to design and develop body temperature measurement device that can be observe by the doctor in real time as well as history data via internet with an alarm/indication in case of abnormalities. In the proposed health monitoring system, heart rate and body temperature wireless sensors were developed, however this paper only focus on body temperature wireless monitoring system. The temperature sensors will send the readings to a microcontroller using Xbee wireless communication. To send the real-time data to health monitoring database, wireless local area network (WLAN) has been used. Arduino with Ethernet shield based on IEEE 802.11 standard has been used for this purpose. Test results from a group of voluntary shows the real-time temperature reading successfully monitored locally (at home) and remotely (at doctor's computer) and the readings are comparable to commercial thermometer.", "title": "" }, { "docid": "143da39941ecc8fb69e87d611503b9c0", "text": "A dual-core 64b Xeonreg MP processor is implemented in a 65nm 8M process. The 435mm2 die has 1.328B transistors. Each core has two threads and a unified 1MB L2 cache. The 16MB unified, 16-way set-associative L3 cache implements both sleep and shut-off leakage reduction modes", "title": "" }, { "docid": "aff7eb1a7235bf662e3307c4775cdedd", "text": "Iterative life cycle models have become popular in software engineering, e.g. in agile development. In contrast, the waterfall model appears to prevail in manufacturing engineering disciplines. This includes aircraft engineering and Boeing’s project for developing its most recent passenger aircraft, the 777. The paper walks through the phases of the 777’s development and compares this process to iterative development. The comparison suggests two observations: Firstly, the over-all waterfall approach in the 777 project appears to be well-motivated by the physical, manufactured nature of aircraft components such as wings, in addition to safety concerns. Secondly, several iterative elements in the development of the 777 can also be identified. A major source of these is digitalization of development, in particular the use of CAD tools for a process called digital preassembly.", "title": "" }, { "docid": "88cf953ba92b54f89cdecebd4153bee3", "text": "In this paper, we propose a novel object detection framework named \"Deep Regionlets\" by establishing a bridge between deep neural networks and conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets for modeling object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions to learn the features from. 
The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a \"gating network\" within the regionlet learning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional efforts. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-the-art algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels.", "title": "" }, { "docid": "b3e1bdd7cfca17782bde698297e191ab", "text": "Synthetic aperture radar (SAR) raw signal simulation is a powerful tool for designing new sensors, testing processing algorithms, planning missions, and devising inversion algorithms. In this paper, a spotlight SAR raw signal simulator for distributed targets is presented. The proposed procedure is based on a Fourier domain analysis: a proper analytical reformulation of the spotlight SAR raw signal expression is presented. It is shown that this reformulation allows us to design a very efficient simulation scheme that employs fast Fourier transform codes. Accordingly, the computational load is dramatically reduced with respect to a time-domain simulation and this, for the first time, makes spotlight simulation of extended scenes feasible.", "title": "" }, { "docid": "44c0237251d54d6ccccd883bf14c6ff6", "text": "In this paper, we propose a new method for indexing large amounts of point and spatial data in high-dimensional space. An analysis shows that index structures such as the R*-tree are not adequate for indexing high-dimensional data sets. The major problem of R-tree-based index structures is the overlap of the bounding boxes in the directory, which increases with growing dimension. To avoid this problem, we introduce a new organization of the directory which uses a split algorithm minimizing overlap and additionally utilizes the concept of supernodes. The basic idea of overlap-minimizing split and supernodes is to keep the directory as hierarchical as possible, and at the same time to avoid splits in the directory that would result in high overlap. Our experiments show that for high-dimensional data, the X-tree outperforms the well-known R*-tree and the TV-tree by up to two orders of magnitude.", "title": "" }, { "docid": "74c386f9d3bc9bbe747a2186542c1fcf", "text": "Assessment of right ventricular afterload in systolic heart failure seems mandatory as it plays an important role in predicting outcome. The purpose of this study is to estimate pulmonary vascular elastance as a reliable surrogate for right ventricular afterload in systolic heart failure. Forty-two patients with systolic heart failure (ejection fraction <35%) were studied by right heart catheterization. Pulmonary arterial elastance was calculated with three methods: Ea(PV) = (end-systolic pulmonary arterial pressure)/stroke volume; Ea*(PV) = (mean pulmonary arterial pressure - pulmonary capillary wedge pressure)/stroke volume; and PPSV = pulmonary arterial pulse pressure (systolic - diastolic)/stroke volume. These measures were compared with pulmonary vascular resistance ([mean pulmonary arterial pressure - pulmonary capillary wedge pressure]/CO).
All estimates of pulmonary vascular elastance were significantly correlated with pulmonary vascular resistance (r=0.772, 0.569, and 0.935 for Ea(PV), Ea*(PV), and PPSV, respectively; P <.001). Pulmonary vascular elastance can easily be estimated by routine right heart catheterization in systolic heart failure and seems promising in assessment of right ventricular afterload.", "title": "" }, { "docid": "6a733448d50fc0dee2e1bdd97d62be73", "text": "The pathological hallmarks of Parkinson’s disease (PD) are marked loss of dopaminergic neurons in the substantia nigra pars compacta (SNc), which causes dopamine depletion in the striatum, and the presence of intracytoplasmic inclusions known as Lewy bodies in the remaining cells. It remains unclear why dopaminergic neuronal cell death and Lewy body formation occur in PD. The pathological changes in PD are seen not only in the SNc but also in the locus coeruleus, pedunculo pontine nucleus, raphe nucleus, dorsal motor nucleus of the vagal nerve, olfactory bulb, parasympathetic as well as sympathetic post-ganglionic neurons, Mynert nucleus, and the cerebral cortex (Braak et al. 2003). Widespread neuropathology in the brainstem and cortical regions are responsible for various motor and non-motor symptoms of PD. Although dopamine replacement therapy improves the functional prognosis of PD, there is currently no treatment that prevents the progression of this disease. Previous studies provided possible evidence that the pathogenesis of PD involves complex interactions between environmental and multiple genetic factors. Exposure to the environmental toxin MPTP was identified as one cause of parkinsonism in 1983 (Langston & Ballard 1983). In addition to MPTP, other environmental toxins, such as the herbicide paraquat and the pesticide rotenone have been shown to contribute to dopaminergic neuronal cell loss and parkinsonism. In contrast, cigarette smoking, caffeine use, and high normal plasma urate levels are associated with lower risk of PD (Hernan et al. 2002). Recently, Braak and coworkers proposed the “Dual Hit” theory, which postulated an unknown pathogen accesses the brain through two pathways, the nose and the gut (Hawkes et al. 2007). Subsequently, a prion-like mechanism might contribute to the propagation of αsynuclein from the peripheral nerve to the central nervous system (Angot et al. 2010). Approximately 5% of patients with clinical features of PD have clear familial etiology. Therefore, genetic factors clearly contribute to the pathogenesis of PD. Over the decade, more than 16 loci and 11 causative genes have been identified, and many studies have shed light on their implication in, not only monogenic, but also sporadic forms of PD. Recent studies revealed that PD-associated genes play important roles in cellular functions, such as mitochondrial functions, the ubiquitin-proteasomal system, autophagy-lysosomal pathway, and membrane trafficking (Hatano et al. 2009). In this chapter, we review the investigations of environmental and genetic factors of PD (Figure 1).", "title": "" }, { "docid": "70cb3fed4ac11ae1fee4e56781c3aed2", "text": "Affordances represent the behavior of objects in terms of the robot's motor and perceptual skills. This type of knowledge plays a crucial role in developmental robotic systems, since it is at the core of many higher level skills such as imitation. In this paper, we propose a general affordance model based on Bayesian networks linking actions, object features and action effects. 
The network is learnt by the robot through interaction with the surrounding objects. The resulting probabilistic model is able to deal with uncertainty, redundancy and irrelevant information. We evaluate the approach using a real humanoid robot that interacts with objects.", "title": "" }, { "docid": "7917e6a788cedd9f1dcb9c3fa132656e", "text": "The smartphone industry has been one of the fastest growing technological areas in recent years. Naturally, the considerable market share of the Android OS and the diversity of app distribution channels besides the official Google Play Store has attracted the attention of malware authors. To deal with the increasing numbers of malicious Android apps in the wild, malware analysts typically rely on analysis tools to extract characteristic information about an app in an automated fashion. While the importance of such tools has been addressed by the research community [8], [24], [25], [27], the resulting prototypes remain limited in terms of analysis capabilities and availability. In this paper we present ANDRUBIS, a completely automated, publicly available and comprehensive analysis system for Android applications. ANDRUBIS combines static analysis techniques with dynamic analysis on both Dalvik VM and system level, as well as several stimulation techniques to increase code coverage.", "title": "" }, { "docid": "23ff4a40f9a62c8a26f3cc3f8025113d", "text": "In the early ages of implantable devices, radio frequency (RF) technologies were not commonplace due to the challenges stemming from the inherent nature of biological tissue boundaries. As technology improved and our understanding matured, the benefit of RF in biomedical applications surpassed the implementation challenges and is thus becoming more widespread. The fundamental challenge is due to the significant electromagnetic (EM) effects of the body at high frequencies. The EM absorption and impedance boundaries of biological tissue result in significant reduction of power and signal integrity for transcutaneous propagation of RF fields. Furthermore, the dielectric properties of the body tissue surrounding the implant must be accounted for in the design of its RF components, such as antennas and inductors, and the tissue is often heterogeneous and the properties are highly variable. Additional challenges for implantable applications include the need for miniaturization, power minimization, and often accounting for a conductive casing due to biocompatibility and hermeticity requirements [1]?[3]. Today, wireless technologies are essentially a must have in most electrical implants due to the need to communicate with the device and even transfer usable energy to the implant [4], [5]. Low-frequency wireless technologies face fewer challenges in this implantable setting than its higher frequency, or RF, counterpart, but are limited to much lower communication speeds and typically have a very limited operating distance. The benefits of high-speed communication and much greater communication distances in biomedical applications have spawned numerous wireless standards committees, and the U.S. Federal Communications Commission (FCC) has allocated numerous frequency bands for medical telemetry as well as those to specifically target implantable applications. 
The development of analytical models, advanced EM simulation software, and representative RF human phantom recipes has significantly facilitated design and optimization of RF components for implantable applications.", "title": "" }, { "docid": "d18a2e1811f2d11e88c9ae780a8ede23", "text": "In this paper, we present the design of error-resilient machine learning architectures by employing a distributed machine learning framework referred to as classifier ensemble (CE). CE combines several simple classifiers to obtain a strong one. In contrast, centralized machine learning employs a single complex block. We compare the random forest (RF) and the support vector machine (SVM), which are representative techniques from the CE and centralized frameworks, respectively. Employing the dataset from UCI machine learning repository and architecturallevel error models in a commercial 45 nm CMOS process, it is demonstrated that RF-based architectures are significantly more robust than SVM architectures in presence of timing errors due to process variations in near-threshold voltage (NTV) regions (0.3 V 0.7 V). In particular, the RF architecture exhibits a detection accuracy (Pdet) that varies by 3.2% while maintaining a median Pdet ≥ 0.9 at a gate level delay variation of 28.9% . In comparison, SVM exhibits a Pdet that varies by 16.8%. Additionally, we propose an error weighted voting technique that incorporates the timing error statistics of the NTV circuit fabric to further enhance robustness. Simulation results confirm that the error weighted voting achieves a Pdet that varies by only 1.4%, which is 12× lower compared to SVM.", "title": "" }, { "docid": "bbfe1231795d0885f7d9a993e4c871d3", "text": "The current research tested the hypothesis that making many choices impairs subsequent self-control. Drawing from a limited-resource model of self-regulation and executive function, the authors hypothesized that decision making depletes the same resource used for self-control and active responding. In 4 laboratory studies, some participants made choices among consumer goods or college course options, whereas others thought about the same options without making choices. Making choices led to reduced self-control (i.e., less physical stamina, reduced persistence in the face of failure, more procrastination, and less quality and quantity of arithmetic calculations). A field study then found that reduced self-control was predicted by shoppers' self-reported degree of previous active decision making. Further studies suggested that choosing is more depleting than merely deliberating and forming preferences about options and more depleting than implementing choices made by someone else and that anticipating the choice task as enjoyable can reduce the depleting effect for the first choices but not for many choices.", "title": "" }, { "docid": "4889fcd360ece04b8550ec61e2d91213", "text": "Rare word representation has recently enjoyed a surge of interest, owing to the crucial role that effective handling of infrequent words can play in accurate semantic understanding. However, there is a paucity of reliable benchmarks for evaluation and comparison of these techniques. We show in this paper that the only existing benchmark (the Stanford Rare Word dataset) suffers from low-confidence annotations and limited vocabulary; hence, it does not constitute a solid comparison framework. 
In order to fill this evaluation gap, we propose CAmbridge Rare word Dataset (CARD-660), an expert-annotated word similarity dataset which provides a highly reliable, yet challenging, benchmark for rare word representation techniques. Through a set of experiments we show that even the best mainstream word embeddings, with millions of words in their vocabularies, are unable to achieve performances higher than 0.43 (Pearson correlation) on the dataset, compared to a human-level upperbound of 0.90. We release the dataset and the annotation materials at https:// pilehvar.github.io/card-660/.", "title": "" }, { "docid": "7c18be7b544d757fe4f497aa78ece26f", "text": "Amplitude calibration of the quartz tuning fork (QTF) sensor includes the measurement of the sensitivity factor (αTF). We propose, AFM based methods (cantilever tracking and z-servo tracking of the QTF's amplitude of vibration) to determine the sensitivity factor of the QTF. The QTF is mounted on a xyz-scanner of the AFM and a soft AFM probe is approached on the apex of a tine of the QTF by driving the z-servo and using the normal deflection voltage (Vtb) of position sensitive detector (PSD) as feedback signal. Once the tip contacts the tine, servo is switched off. QTF is electrically excited with a sinusoidal signal from OC4 (Nanonis) and amplitude of the QTF's output at transimpedance amplifier (Vtf) and amplitude of VTB (Vp) is measured by individual lock-in amplifiers which are internally synchronized to the phase of the excitation signal of the QTF. Before, the measurements optical lever is calibrated. By relating the both voltages (Vp & Vtf), sensitivity factor of the QTF (αTF) is determined. In the second approach, after the tip contacts the tine, the z-servo is switched off firstly, then the feedback signal is switched to Vp and frequency sweep for the QTF, Vtb as well as z-servo are started, instantaneously. To keep the Vp at set-point the feedback control moves the z-servo to track the vibration amplitude of the QTF and thus the distance traveled by the z-servo (Δζ) during sweep is equal to the fork's amplitude of vibration (ΔxTF). αtf is determined by relating Δz and VTF. Both approaches can be non-destructively applied for QTF sensor calibration. AFM imaging of the AFM calibration grating TGZ1 (from NT-MDT Russia) has been performed with a calibrated QTF sensor.", "title": "" }, { "docid": "00549502ab17ccdc5dad6c14a42c73e6", "text": "This paper examined the relationships between the experiences and perceptions of racism and the physical and mental health status of African Americans. The study was based upon thirteen year (1979 to 1992), four wave, national panel data (n = 623) from the National Survey of Black Americans. Personal experiences of racism were found to have both adverse and salubrious immediate and cumulative effects on the physical and mental well-being of African Americans. In 1979-80, reports of poor treatment due to race were inversely related to subjective well-being and positively associated with the number of reported physical health problems. Reports of negative racial encounters over the 13-year period were weakly predictive of poor subjective well-being in 1992. A more general measure of racial beliefs, perceiving that whites want to keep blacks down, was found to be related to poorer physical health in 1979-80, better physical health in 1992, and predicted increased psychological distress, as well as, lower levels of subjective well-being in 1992. 
In conclusion, the authors suggested future research on possible factors contributing to the relationship between racism and health status among African Americans.", "title": "" }, { "docid": "a88b2916f73dedabceda574f10a93672", "text": "A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most of the existing algorithms are based on laser range finders, sonar sensors or artificial landmarks. In this paper, we describe a vision-based mobile robot localization and mapping algorithm, which uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling and rotation makes them suitable landmarks for mobile robot localization and map building. With our Triclops stereo vision system, these landmarks are localized and robot ego-motion is estimated by least-squares minimization of the matched landmarks. Feature viewpoint variation and occlusion are taken into account by maintaining a view direction for each landmark. Experiments show that these visual landmarks are robustly matched, robot pose is estimated and a consistent three-dimensional map is built. As image features are not noise-free, we carry out error analysis for the landmark positions and the robot pose. We use Kalman filters to track these landmarks in a dynamic environment, resulting in a database map with landmark positional uncertainty. KEY WORDS—localization, mapping, visual landmarks, mobile robot", "title": "" }, { "docid": "dfb95120d19a363a27d162b598cdcf26", "text": "Light field imaging has emerged as a technology allowing to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene integrating the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high-dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.", "title": "" }, { "docid": "2418cf34f09335d6232193b21ee7ae49", "text": "The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. 
Due to low sampling rates supported by web-based vision sensor and accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensor suffers from rapid deterioration in accuracy with time. This paper starts with a discussion of developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan Variance analysis, and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.", "title": "" }, { "docid": "a3fa64c1f6553a46cfd9f88e9a802bb2", "text": "With the increasing use of liquid crystal-based displays in everyday life, led both by the development of new portable electronic devices and the desire to minimize the use of printed paper, Nematic Liquid Crystals [4] (NLCs) are now hugely important industrial materials; and research into ways to engineer more efficient display technologies is crucial. Modern electronic display technology mostly relies on the ability of NLC materials to rotate the plane of polarized light (birefringence). The degree to which they can do this depends on the orientation of the molecules within the liquid crystal, and this in turn is affected by factors such as an applied electric field (the molecules, which are typically long and thin, line up in an applied field), or by boundary effects (a phenomenon known as surface anchoring). Most devices currently available use the former effect: an electric field is applied to control the molecular orientation of a thin film of nematic liquid crystal between crossed polarizers (which are also the electrodes), and this in turn controls the optical effect when light passes through the layer (figure 1). The main disadvantage of this set-up is that the electric field must be applied constantly in order for the display to maintain its configuration – if the field is removed, the molecules of the NLC relax into the unique, stable, field-free state (giving no contrast between pixels, and a monochrome display). This is expensive in terms of power consumption, leading to generally short battery lifetimes. On the other hand, if one could somehow exploit the fact that the bounding surfaces of a cell affect the molecular configuration – the anchoring effect, which can, to a large extent, be controlled by mechanical or chemical treatments [1]– then one might be able to engineer a bistable system, with two (or more) stable field-free states, giving two optically-distinct stable steady states of the device, without any electric field required to sustain them. Power is required only to change the state of the cell from one steady state to the other (and this issue of “switchability”, which can be hard to achieve, is really the challenging part of the design). Such technology is particularly appropriate for LCDs that change only infrequently, e.g. “electronic paper” applications such as e-books, e-newspapers, and so on. Certain technologies for bistable devices already exist; and most use the surface anchoring effect, combined with a clever choice of bounding surface geometry. 
The goal of this project will be to investigate simpler designs for liquid crystal devices that exhibit bistability. With planar surface topography, but different anchoring conditions at the two bounding surfaces, bistability is possible [2,3]; and a device of this kind should be easier to manufacture. Two different modeling approaches can be taken depending on what design aspect is to be optimized. A simple approach is to study only steady states of the system. Such states will be governed by (nonlinear) ODEs, and stability can be investigated as the electric field strength is varied. In a system with several steady states, loss of stability of one state at a critical field would mean a bifurcation of the solution, and a switch to a different state. Such an analysis could give information about how to achieve switching at low critical fields, for example; or at physically-realistic material parameter values; but would say nothing about how fast the switching might be. Speed of switching would need to be investigated by studying a simple PDE model for the system. We can explore both approaches here, and attempt to come up with some kind of “optimal” design – whatever that means!", "title": "" } ]
scidocsrr
7405ec2ba1a912bb17416baa54b84996
Neural Machine Translation with External Phrase Memory
[ { "docid": "b02992d4ffe592d3afb7efcbdc64a195", "text": "Neural Machine Translation (NMT) has obtained state-of-the art performance for several language pairs, while only using parallel data for training. Targetside monolingual data plays an important role in boosting fluency for phrasebased statistical machine translation, and we investigate the use of monolingual data for NMT. In contrast to previous work, which combines NMT models with separately trained language models, we note that encoder-decoder NMT architectures already have the capacity to learn the same information as a language model, and we explore strategies to train with monolingual data without changing the neural network architecture. By pairing monolingual training data with an automatic backtranslation, we can treat it as additional parallel training data, and we obtain substantial improvements on the WMT 15 task English↔German (+2.8–3.7 BLEU), and for the low-resourced IWSLT 14 task Turkish→English (+2.1–3.4 BLEU), obtaining new state-of-the-art results. We also show that fine-tuning on in-domain monolingual and parallel data gives substantial improvements for the IWSLT 15 task English→German.", "title": "" }, { "docid": "5e601792447020020aa02ee539b3a2cf", "text": "The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT. In this paper, we give a more systematic treatment by summarizing the relevant source information through a convolutional architecture guided by the target information. With different guiding signals during decoding, our specifically designed convolution+gating architectures can pinpoint the parts of a source sentence that are relevant to predicting a target word, and fuse them with the context of entire source sentence to form a unified representation. This representation, together with target language words, are fed to a deep neural network (DNN) to form a stronger NNJM. Experiments on two NIST Chinese-English translation tasks show that the proposed model can achieve significant improvements over the previous NNJM by up to +1.01 BLEU points on average.", "title": "" } ]
[ { "docid": "268e0e06a23f495cc36958dafaaa045a", "text": "Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one’s experiences—a hallmark of human intelligence from infancy—remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between “hand-engineering” and “end-to-end” learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias—the graph network—which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have also released an open-source software library for building graph networks, with demonstrations of how to use them in practice.", "title": "" }, { "docid": "83fbffec2e727e6ed6be1e02f54e1e47", "text": "Large dc and ac electric currents are often measured by open-loop sensors without a magnetic yoke. A widely used configuration uses a differential magnetic sensor inserted into a hole in a flat busbar. The use of a differential sensor offers the advantage of partial suppression of fields coming from external currents. Hall sensors and AMR sensors are currently used in this application. In this paper, we present a current sensor of this type that uses novel integrated fluxgate sensors, which offer a greater range than magnetoresistors and better stability than Hall sensors. The frequency response of this type of current sensor is limited due to the eddy currents in the solid busbar. We present a novel amphitheater geometry of the hole in the busbar of the sensor, which reduces the frequency dependence from 15% error at 1 kHz to 9%.", "title": "" }, { "docid": "f5b372607a89ea6595683276e48d6dce", "text": "In this paper, we present YAMAMA, a multi-dialect Arabic morphological analyzer and disambiguator. Our system is almost five times faster than the state-of-the-art MADAMIRA system with a slightly lower quality. In addition to speed, YAMAMA outputs a rich representation which allows for a wider spectrum of use. 
In this regard, YAMAMA transcends other systems, such as FARASA, which is faster but provides specific outputs catering to specific applications.", "title": "" }, { "docid": "7e105b5c8723759de40e91565c251a56", "text": "This paper addresses the problem of RGBD object recognition in real-world applications, where large amounts of annotated training data are typically unavailable. To overcome this problem, we propose a novel, weakly-supervised learning architecture (DCNN-GPC) which combines parametric models (a pair of Deep Convolutional Neural Networks (DCNN) for RGB and D modalities) with non-parametric models (Gaussian Process Classification). Our system is initially trained using a small amount of labeled data, and then automatically propagates labels to large-scale unlabeled data. We first run 3Dbased objectness detection on RGBD videos to acquire many unlabeled object proposals, and then employ DCNN-GPC to label them. As a result, our multi-modal DCNN can be trained end-to-end using only a small amount of human annotation. Finally, our 3D-based objectness detection and multi-modal DCNN are integrated into a real-time detection and recognition pipeline. In our approach, bounding-box annotations are not required and boundary-aware detection is achieved. We also propose a novel way to pretrain a DCNN for the depth modality, by training on virtual depth images projected from CAD models. We pretrain our multi-modal DCNN on public 3D datasets, achieving performance comparable to state-of-the-art methods on Washington RGBS Dataset. We then finetune the network by further training on a small amount of annotated data from our novel dataset of industrial objects (nuclear waste simulants). Our weakly supervised approach has demonstrated to be highly effective in solving a novel RGBD object recognition application which lacks of human annotations.", "title": "" }, { "docid": "7d3950bbd817ddc385014c9091c48b0d", "text": "With the rapid development of ubiquitous computing and mobile communication technologies, the traditional business model will change drastically. As a logical extension of e-commerce and m-commerce, ubiquitous commerce (u-commerce) research and application are currently under transition with a history of numerous tried and failed solutions, and a future of promising but yet uncertain possibilities with potential new technology innovations. At this point of the development, we propose a suitable framework and organize the u-commerce research under the proposed classification scheme. The current situation outlined by the scheme has been addressed by exploratory and early phase studies. We hope the findings of this research will provide useful insights for anyone who is interested in u-commerce. The paper also provides some future directions for research.", "title": "" }, { "docid": "1e176f66a29b6bd3dfce649da1a4db9d", "text": "In just a few years, crowdsourcing markets like Mechanical Turk have become the dominant mechanism for for building \"gold standard\" datasets in areas of computer science ranging from natural language processing to audio transcription. The assumption behind this sea change - an assumption that is central to the approaches taken in hundreds of research projects - is that crowdsourced markets can accurately replicate the judgments of the general population for knowledge-oriented tasks. 
Focusing on the important domain of semantic relatedness algorithms and leveraging Clark's theory of common ground as a framework, we demonstrate that this assumption can be highly problematic. Using 7,921 semantic relatedness judgements from 72 scholars and 39 crowdworkers, we show that crowdworkers on Mechanical Turk produce significantly different semantic relatedness gold standard judgements than people from other communities. We also show that algorithms that perform well against Mechanical Turk gold standard datasets do significantly worse when evaluated against other communities' gold standards. Our results call into question the broad use of Mechanical Turk for the development of gold standard datasets and demonstrate the importance of understanding these datasets from a human-centered point-of-view. More generally, our findings problematize the notion that a universal gold standard dataset exists for all knowledge tasks.", "title": "" }, { "docid": "505137d61a0087e054a2cf09c8addb4b", "text": "A delay tolerant network (DTN) is a store and forward network where end-to-end connectivity is not assumed and where opportunistic links between nodes are used to transfer data. An emerging application of DTNs are rural area DTNs, which provide Internet connectivity to rural areas in developing regions using conventional transportation mediums, like buses. Potential applications of these rural area DTNs are e-governance, telemedicine and citizen journalism. Therefore, security and privacy are critical for DTNs. Traditional cryptographic techniques based on PKI-certified public keys assume continuous network access, which makes these techniques inapplicable to DTNs. We present the first anonymous communication solution for DTNs and introduce a new anonymous authentication protocol as a part of it. Furthermore, we present a security infrastructure for DTNs to provide efficient secure communication based on identity-based cryptography. We show that our solutions have better performance than existing security infrastructures for DTNs.", "title": "" }, { "docid": "1b6adeb66afcdd69950c9dfd7cb2e54a", "text": "The vision of the Semantic Web was coined by Tim Berners-Lee almost two decades ago. The idea describes an extension of the existing Web in which “information is given well-defined meaning, better enabling computers and people to work in cooperation” [Berners-Lee et al., 2001]. Semantic annotations in HTML pages are one realization of this vision which was adopted by large numbers of web sites in the last years. Semantic annotations are integrated into the code of HTML pages using one of the three markup languages Microformats, RDFa, or Microdata. Major consumers of semantic annotations are the search engine companies Bing, Google, Yahoo!, and Yandex. They use semantic annotations from crawled web pages to enrich the presentation of search results and to complement their knowledge bases. However, outside the large search engine companies, little is known about the deployment of semantic annotations: How many web sites deploy semantic annotations? What are the topics covered by semantic annotations? How detailed are the annotations? Do web sites use semantic annotations correctly? Are semantic annotations useful for others than the search engine companies? And how can semantic annotations be gathered from the Web in that case? The thesis answers these questions by profiling the web-wide deployment of semantic annotations. 
The topic is approached in three consecutive steps: In the first step, two approaches for extracting semantic annotations from the Web are discussed. The thesis evaluates first the technique of focused crawling for harvesting semantic annotations. Afterward, a framework to extract semantic annotations from existing web crawl corpora is described. The two extraction approaches are then compared for the purpose of analyzing the deployment of semantic annotations in the Web. In the second step, the thesis analyzes the overall and markup language-specific adoption of semantic annotations. This empirical investigation is based on the largest web corpus that is available to the public. Further, the topics covered by deployed semantic annotations and their evolution over time are analyzed. Subsequent studies examine common errors within semantic annotations. In addition, the thesis analyzes the data overlap of the entities that are described by semantic annotations from the same and across different web sites. The third step narrows the focus of the analysis towards use case-specific issues. Based on the requirements of a marketplace, a news aggregator, and a travel portal the thesis empirically examines the utility of semantic annotations for these use cases. Additional experiments analyze the capability of product-related semantic annotations to be integrated into an existing product categorization schema. Especially, the potential of exploiting the diverse category information given by the web sites providing semantic annotations is evaluated.", "title": "" }, { "docid": "1700ee1ba5fef2c9efa9a2b8bfa7d6bd", "text": "This work studies resource allocation in a cloud market through the auction of Virtual Machine (VM) instances. It generalizes the existing literature by introducing combinatorial auctions of heterogeneous VMs, and models dynamic VM provisioning. Social welfare maximization under dynamic resource provisioning is proven NP-hard, and modeled with a linear integer program. An efficient α-approximation algorithm is designed, with α ~ 2.72 in typical scenarios. We then employ this algorithm as a building block for designing a randomized combinatorial auction that is computationally efficient, truthful in expectation, and guarantees the same social welfare approximation factor α. A key technique in the design is to utilize a pair of tailored primal and dual LPs for exploiting the underlying packing structure of the social welfare maximization problem, to decompose its fractional solution into a convex combination of integral solutions. Empirical studies driven by Google Cluster traces verify the efficacy of the randomized auction.", "title": "" }, { "docid": "6757bde927be1bf081ffd95908ebbbf3", "text": "Human action recognition has been studied in many fields including computer vision and sensor networks using inertial sensors. However, there are limitations such as spatial constraints, occlusions in images, sensor unreliability, and the inconvenience of users. In order to solve these problems we suggest a sensor fusion method for human action recognition exploiting RGB images from a single fixed camera and a single wrist mounted inertial sensor. These two different domain information can complement each other to fill the deficiencies that exist in both image based and inertial sensor based human action recognition methods. 
We propose two convolutional neural network (CNN) based feature extraction networks for image and inertial sensor data and a recurrent neural network (RNN) based classification network with long short term memory (LSTM) units. Training of deep neural networks and testing are done with synchronized images and sensor data collected from five individuals. The proposed method results in better performance compared to single sensor-based methods with an accuracy of 86.9% in cross-validation. We also verify that the proposed algorithm robustly classifies the target action when there are failures in detecting body joints from images.", "title": "" }, { "docid": "0cf5f7521cccd0757be3a50617cf2473", "text": "In 1997, Moody and Wu presented recurrent reinforcement learning (RRL) as a viable machine learning method within algorithmic trading. Subsequent research has shown a degree of controversy with regards to the benefits of incorporating technical indicators in the recurrent reinforcement learning framework. In 1991, Nison introduced Japanese candlesticks to the global research community as an alternative to employing traditional indicators within the technical analysis of financial time series. The literature accumulated over the past two and a half decades of research contains conflicting results with regards to the utility of using Japanese candlestick patterns to exploit inefficiencies in financial time series. In this paper, we combine features based on Japanese candlesticks with recurrent reinforcement learning to produce a high-frequency algorithmic trading system for the E-mini S&P 500 index futures market. Our empirical study shows a statistically significant increase in both return and Sharpe ratio compared to relevant benchmarks, suggesting the existence of exploitable spatio-temporal structure in Japanese candlestick patterns and the ability of recurrent reinforcement learning to detect and take advantage of this structure in a high-frequency equity index futures trading environment.", "title": "" }, { "docid": "6fc86c662db76c22e708c5091af6a0da", "text": "Liver hemangiomas are the most common benign liver tumors and are usually incidental findings. Liver hemangiomas are readily demonstrated by abdominal ultrasonography, computed tomography or magnetic resonance imaging. Giant liver hemangiomas are defined by a diameter larger than 5 cm. In patients with a giant liver hemangioma, observation is justified in the absence of symptoms. Surgical resection is indicated in patients with abdominal (mechanical) complaints or complications, or when diagnosis remains inconclusive. Enucleation is the preferred surgical method, according to existing literature and our own experience. Spontaneous or traumatic rupture of a giant hepatic hemangioma is rare, however, the mortality rate is high (36-39%). An uncommon complication of a giant hemangioma is disseminated intravascular coagulation (Kasabach-Merritt syndrome); intervention is then required. Herein, the authors provide a literature update of the current evidence concerning the management of giant hepatic hemangiomas. In addition, the authors assessed treatment strategies and outcomes in a series of patients with giant liver hemangiomas managed in our department.", "title": "" }, { "docid": "918e7434798ebcfdf075fa93cbffba39", "text": "Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. 
First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset.", "title": "" }, { "docid": "7c0748301936c39166b9f91ba72d92ef", "text": "methods and native methods are considered to be type safe if they do not override a final method. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(abstract, AccessFlags). methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(native, AccessFlags). private methods and static methods are orthogonal to dynamic method dispatch, so they never override other methods (§5.4.5). doesNotOverrideFinalMethod(class('java/lang/Object', L), Method) :isBootstrapLoader(L). doesNotOverrideFinalMethod(Class, Method) :isPrivate(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isStatic(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isNotPrivate(Method, Class), isNotStatic(Method, Class), doesNotOverrideFinalMethodOfSuperclass(Class, Method). doesNotOverrideFinalMethodOfSuperclass(Class, Method) :classSuperClassName(Class, SuperclassName), classDefiningLoader(Class, L), loadedClass(SuperclassName, L, Superclass), classMethods(Superclass, SuperMethodList), finalMethodNotOverridden(Method, Superclass, SuperMethodList). 4.10 Verification of class Files THE CLASS FILE FORMAT 202 final methods that are private and/or static are unusual, as private methods and static methods cannot be overridden per se. Therefore, if a final private method or a final static method is found, it was logically not overridden by another method. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isStatic(Method, Superclass). If a non-final private method or a non-final static method is found, skip over it because it is orthogonal to overriding. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isPrivate(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isStatic(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). 
THE CLASS FILE FORMAT Verification of class Files 4.10 203 If a non-final, non-private, non-static method is found, then indeed a final method was not overridden. Otherwise, recurse upwards. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isNotStatic(Method, Superclass), isNotPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), notMember(method(_, Name, Descriptor), SuperMethodList), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). 4.10 Verification of class Files THE CLASS FILE FORMAT 204 4.10.1.6 Type Checking Methods with Code Non-abstract, non-native methods are type correct if they have code and the code is type correct. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), methodAttributes(Method, Attributes), notMember(native, AccessFlags), notMember(abstract, AccessFlags), member(attribute('Code', _), Attributes), methodWithCodeIsTypeSafe(Class, Method). A method with code is type safe if it is possible to merge the code and the stack map frames into a single stream such that each stack map frame precedes the instruction it corresponds to, and the merged stream is type correct. The method's exception handlers, if any, must also be legal. methodWithCodeIsTypeSafe(Class, Method) :parseCodeAttribute(Class, Method, FrameSize, MaxStack, ParsedCode, Handlers, StackMap), mergeStackMapAndCode(StackMap, ParsedCode, MergedCode), methodInitialStackFrame(Class, Method, FrameSize, StackFrame, ReturnType), Environment = environment(Class, Method, ReturnType, MergedCode, MaxStack, Handlers), handlersAreLegal(Environment), mergedCodeIsTypeSafe(Environment, MergedCode, StackFrame). THE CLASS FILE FORMAT Verification of class Files 4.10 205 Let us consider exception handlers first. An exception handler is represented by a functor application of the form: handler(Start, End, Target, ClassName) whose arguments are, respectively, the start and end of the range of instructions covered by the handler, the first instruction of the handler code, and the name of the exception class that this handler is designed to handle. An exception handler is legal if its start (Start) is less than its end (End), there exists an instruction whose offset is equal to Start, there exists an instruction whose offset equals End, and the handler's exception class is assignable to the class Throwable. The exception class of a handler is Throwable if the handler's class entry is 0, otherwise it is the class named in the handler. An additional requirement exists for a handler inside an <init> method if one of the instructions covered by the handler is invokespecial of an <init> method. In this case, the fact that a handler is running means the object under construction is likely broken, so it is important that the handler does not swallow the exception and allow the enclosing <init> method to return normally to the caller. Accordingly, the handler is required to either complete abruptly by throwing an exception to the caller of the enclosing <init> method, or to loop forever. 4.10 Verification of class Files THE CLASS FILE FORMAT 206 handlersAreLegal(Environment) :exceptionHandlers(Environment, Handlers), checklist(handlerIsLegal(Environment), Handlers). 
handlerIsLegal(Environment, Handler) :Handler = handler(Start, End, Target, _), Start < End, allInstructions(Environment, Instructions), member(instruction(Start, _), Instructions), offsetStackFrame(Environment, Target, _), instructionsIncludeEnd(Instructions, End), currentClassLoader(Environment, CurrentLoader), handlerExceptionClass(Handler, ExceptionClass, CurrentLoader), isBootstrapLoader(BL), isAssignable(ExceptionClass, class('java/lang/Throwable', BL)), initHandlerIsLegal(Environment, Handler). instructionsIncludeEnd(Instructions, End) :member(instruction(End, _), Instructions). instructionsIncludeEnd(Instructions, End) :member(endOfCode(End), Instructions). handlerExceptionClass(handler(_, _, _, 0), class('java/lang/Throwable', BL), _) :isBootstrapLoader(BL). handlerExceptionClass(handler(_, _, _, Name), class(Name, L), L) :Name \\= 0. THE CLASS FILE FORMAT Verification of class Files 4.10 207 initHandlerIsLegal(Environment, Handler) :notInitHandler(Environment, Handler). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isNotInit(Method). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method), member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, MethodName, Descriptor), MethodName \\= '<init>'. initHandlerIsLegal(Environment, Handler) :isInitHandler(Environment, Handler), sublist(isApplicableInstruction(Target), Instructions, HandlerInstructions), noAttemptToReturnNormally(HandlerInstructions). isInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method). member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, '<init>', Descriptor). isApplicableInstruction(HandlerStart, instruction(Offset, _)) :Offset >= HandlerStart. noAttemptToReturnNormally(Instructions) :notMember(instruction(_, return), Instructions). noAttemptToReturnNormally(Instructions) :member(instruction(_, athrow), Instructions). 4.10 Verification of class Files THE CLASS FILE FORMAT 208 Let us now turn to the stream of instructions and stack map frames. Merging instructions and stack map frames into a single stream involves four cases: • Merging an empty StackMap and a list of instructions yields the original list of instructions. mergeStackMapAndCode([], CodeList, CodeList). • Given a list of stack map frames beginning with the type state for the instruction at Offset, and a list of instructions beginning at Offset, the merged list is the head of the stack map frame list, followed by the head of the instruction list, followed by the merge of the tails of the two lists. mergeStackMapAndCode([stackMap(Offset, Map) | RestMap], [instruction(Offset, Parse) | RestCode], [stackMap(Offset, Map), instruction(Offset, Parse) | RestMerge]) :mergeStackMapAndCode(RestMap, RestCode, RestMerge). • Otherwise, given a list of stack map frames beginning with the type state for the instruction at OffsetM, and a list of instructions beginning at OffsetP, then, if OffsetP < OffsetM, the merged list consists of the head of the instruction list, followed by the merge of the stack map frame list and the tail of the instruction list. mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], [instruction(OffsetP, Parse) | RestCode], [instruction(OffsetP, Parse) | RestMerge]) :OffsetP < OffsetM, mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], RestCode, RestMerge). 
• Otherwise, the merge of the two lists is undefined. Since the instruction list has monotonically increasing offsets, the merge of the two lists is not defined unless every stack map frame offset has a corresponding instruction offset and the stack map frames are in monotonically ", "title": "" }, { "docid": "34814e0654d52bdea8dce526eeb6e6ea", "text": "Nearest Neighbor Queries Revisited", "title": "" }, { "docid": "27757700ae02079e873b3ca5b3710dae", "text": "The MPI Bioinformatics Toolkit is an interactive web service which offers access to a great variety of public and in-house bioinformatics tools. They are grouped into different sections that support sequence searches, multiple alignment, secondary and tertiary structure prediction and classification. Several public tools are offered in customized versions that extend their functionality. For example, PSI-BLAST can be run against regularly updated standard databases, customized user databases or selectable sets of genomes. Another tool, Quick2D, integrates the results of various secondary structure, transmembrane and disorder prediction programs into one view. The Toolkit provides a friendly and intuitive user interface with an online help facility. As a key feature, various tools are interconnected so that the results of one tool can be forwarded to other tools. One could run PSI-BLAST, parse out a multiple alignment of selected hits and send the results to a cluster analysis tool. The Toolkit framework and the tools developed in-house will be packaged and freely available under the GNU Lesser General Public Licence (LGPL). The Toolkit can be accessed at http://toolkit.tuebingen.mpg.de.", "title": "" }, { "docid": "318aa0dab44cca5919100033aa692cd9", "text": "Text classification is one of the important research issues in the field of text mining, where the documents are classified with supervised knowledge. In literature we can find many text representation schemes and classifiers/learning algorithms used to classify text documents to the predefined categories. In this paper, we present various text representation schemes and compare different classifiers used to classify text documents to the predefined classes. The existing methods are compared and contrasted based on qualitative parameters viz., criteria used for classification, algorithms adopted and classification time complexities.", "title": "" }, { "docid": "639729ba7b21f8b73e6dc363fe0f217f", "text": "Various magnetic nanoparticles have been extensively investigated as novel magnetic resonance imaging (MRI) contrast agents owing to their unique characteristics, including efficient contrast effects, biocompatibility, and versatile surface functionalization capability. Nanoparticles with high relaxivity are very desirable because they would increase the accuracy of MRI. Recent progress in nanotechnology enables fine control of the size, crystal structure, and surface properties of iron oxide nanoparticles. In this tutorial review, we discuss how MRI contrast effects can be improved by controlling the size, composition, doping, assembly, and surface properties of iron-oxide-based nanoparticles.", "title": "" }, { "docid": "707b75a5fa5e796c18bcaf17cd43075d", "text": "This paper presents a new feedback control strategy for balancing individual DC capacitor voltages in a three-phase cascade multilevel inverter-based static synchronous compensator. The design of the control strategy is based on the detailed small-signal model. 
The key part of the proposed controller is a compensator to cancel the variation parts in the model. The controller can balance individual DC capacitor voltages when H-bridges run with different switching patterns and have parameter variations. It has two advantages: 1) the controller can work well in all operation modes (the capacitive mode, the inductive mode, and the standby mode) and 2) the impact of the individual DC voltage controller on the voltage quality is small. Simulation results and experimental results verify the performance of the controller.", "title": "" }, { "docid": "4830c447cb27d5ad1696bb25ce8c89fd", "text": "For a grid-connected converter with an LCL filter, the harmonic compensators of a proportional-resonant (PR) controller are usually limited to several low-order current harmonics due to system instability when the compensated frequency is out of the bandwidth of the system control loop. In this paper, a new current feedback method for PR current control is proposed. The weighted average value of the currents flowing through the two inductors of the LCL filter is used as the feedback to the current PR regulator. Consequently, the control system with the LCL filter is degraded from a third-order function to a first-order one. A large proportional control-loop gain can be chosen to obtain a wide control-loop bandwidth, and the system can be optimized easily for minimum current harmonic distortions, as well as system stability. The inverter system with the proposed controller is investigated and compared with those using traditional control methods. Experimental results on a 5-kW fuel-cell inverter are provided, and the new current control strategy has been verified.", "title": "" } ]
scidocsrr
e7a83e33fbb12467db1eede17a0cc7d5
Statistical models of appearance for eye tracking and eye-blink detection and measurement
[ { "docid": "f48cc4c9884bac97e50e222776f15413", "text": "An active contour tracker is presented which can be used for gaze-based interaction with off-the-shelf components. The underlying contour model is based on image statistics and avoids explicit feature detection. The tracker combines particle filtering with the EM algorithm. The method exhibits robustness to light changes and camera defocusing; consequently, the model is well suited for use in systems using off-the-shelf hardware, but may equally well be used in controlled environments, such as in IR-based settings. The method is even capable of handling sudden changes between IR and non-IR light conditions, without changing parameters. For the purpose of determining where the user is looking, calibration is usually needed. The number of calibration points used in different methods varies from a few to several thousands, depending on the prior knowledge used on the setup and equipment. We examine basic properties of gaze determination when the geometry of the camera, screen, and user is unknown. In particular we present a lower bound on the number of calibration points needed for gaze determination on planar objects, and we examine degenerate configurations. Based on this lower bound we apply a simple calibration procedure, to facilitate gaze estimation. 2004 Elsevier Inc. All rights reserved.", "title": "" } ]
[ { "docid": "fae8fc0572bb2bce68b028924af70096", "text": "Wireless LAN (WLAN), despite its popularity, is subject to various security threats. Encrypting the data being transmitted is one of the approaches to address such risks. However, encryption algorithms are known to be computationally intensive and the relation between the strength of encryption and computational intensity is inversely proportional. In this paper, we discuss the challenges in the implementation of encryption algorithm in WLAN. We then compare and analyze the results of experiments to compare these algorithms vis-a-vis their energy consumption. It will be seen that different encryption schemes are fit for different types of messages. We propose an intelligent encryption scheme for optimal security.", "title": "" }, { "docid": "4ee5931bf57096913f7e13e5da0fbe7e", "text": "The design of an ultra wideband aperture-coupled vertical microstrip-microstrip transition is presented. The proposed transition exploits broadside coupling between exponentially tapered microstrip patches at the top and bottom layers via an exponentially tapered slot at the mid layer. The theoretical analysis indicates that the best performance concerning the insertion loss and the return loss over the maximum possible bandwidth can be achieved when the coupling factor is equal to 0.75 (or 2.5 dB). The calculated and simulated results show that the proposed transition has a linear phase performance, an important factor for distortionless pulse operation, with less than 0.4 dB insertion loss and more than 17 dB return loss across the frequency band 3.1 GHz to 10.6 GHz.", "title": "" }, { "docid": "dfdf2581010777e51ff3e29c5b9aee7f", "text": "This paper proposes a parallel architecture with resistive crosspoint array. The design of its two essential operations, read and write, is inspired by the biophysical behavior of a neural system, such as integrate-and-fire and local synapse weight update. The proposed hardware consists of an array with resistive random access memory (RRAM) and CMOS peripheral circuits, which perform matrix-vector multiplication and dictionary update in a fully parallel fashion, at the speed that is independent of the matrix dimension. The read and write circuits are implemented in 65 nm CMOS technology and verified together with an array of RRAM device model built from experimental data. The overall system exploits array-level parallelism and is demonstrated for accelerated dictionary learning tasks. As compared to software implementation running on a 8-core CPU, the proposed hardware achieves more than 3000 × speedup, enabling high-speed feature extraction on a single chip.", "title": "" }, { "docid": "ef77d042a04b7fa704f13a0fa5e73688", "text": "The nature of the cellular basis of learning and memory remains an often-discussed, but elusive problem in neurobiology. A popular model for the physiological mechanisms underlying learning and memory postulates that memories are stored by alterations in the strength of neuronal connections within the appropriate neural circuitry. Thus, an understanding of the cellular and molecular basis of synaptic plasticity will expand our knowledge of the molecular basis of learning and memory. The view that learning was the result of altered synaptic weights was first proposed by Ramon y Cajal in 1911 and formalized by Donald O. Hebb. 
In 1949, Hebb proposed his \"learning rule,\" which suggested that alterations in the strength of synapses would occur between two neurons when those neurons were active simultaneously (1). Hebb's original postulate focused on the need for synaptic activity to lead to the generation of action potentials in the postsynaptic neuron, although more recent work has extended this to include local depolarization at the synapse. One problem with testing this hypothesis is that it has been difficult to record directly the activity of single synapses in a behaving animal. Thus, the challenge in the field has been to relate changes in synaptic efficacy to specific behavioral instances of associative learning. In this chapter, we will review the relationship among synaptic plasticity, learning, and memory. We will examine the extent to which various current models of neuronal plasticity provide potential bases for memory storage and we will explore some of the signal transduction pathways that are critically important for long-term memory storage. We will focus on two systems—the gill and siphon withdrawal reflex of the invertebrate Aplysia californica and the mammalian hippocampus—and discuss the abilities of models of synaptic plasticity and learning to account for a range of genetic, pharmacological, and behavioral data.", "title": "" }, { "docid": "e09594fce400df1297c5c32afac85fee", "text": "Results: Of the 74 ears tested, 45 (61%) had effusion on direct inspection. The effusion was purulent in 8 ears (18%), serous in 9 ears (20%), and mucoid in 28 ears (62%). Ultrasound identified the presence or absence of effusion in 71 cases (96%) (P=.04). Ultrasound distinguished between serous and mucoid effusion with 100% accuracy (P=.04). The probe did not distinguish between mucoid and purulent effusion.", "title": "" }, { "docid": "880aa3de3b839739927cbd82b7abcf8a", "text": "Can parents burn out? The aim of this research was to examine the construct validity of the concept of parental burnout and to provide researchers with an instrument to measure it. We conducted two successive questionnaire-based online studies, the first with a community-sample of 379 parents using principal component analyses and the second with a community-sample of 1,723 parents using both principal component analyses and confirmatory factor analyses. We investigated whether the tridimensional structure of the burnout syndrome (i.e., exhaustion, inefficacy, and depersonalization) held in the parental context. We then examined the specificity of parental burnout vis-à-vis professional burnout assessed with the Maslach Burnout Inventory, parental stress assessed with the Parental Stress Questionnaire and depression assessed with the Beck Depression Inventory. The results support the validity of a tri-dimensional burnout syndrome including exhaustion, inefficacy and emotional distancing with, respectively, 53.96 and 55.76% variance explained in study 1 and study 2, and reliability ranging from 0.89 to 0.94. The final version of the Parental Burnout Inventory (PBI) consists of 22 items and displays strong psychometric properties (CFI = 0.95, RMSEA = 0.06). Low to moderate correlations between parental burnout and professional burnout, parental stress and depression suggests that parental burnout is not just burnout, stress or depression. The prevalence of parental burnout confirms that some parents are so exhausted that the term \"burnout\" is appropriate. The proportion of burnout parents lies somewhere between 2 and 12%. 
The results are discussed in light of their implications at the micro-, meso- and macro-levels.", "title": "" }, { "docid": "924eea928a53d9b9a9f0bfc74c88fac0", "text": "Social network sites (SNS) have rapidly become very popular, challenging even the major portals and search engines in terms of usage and commercial value. This chapter introduces key SNS issues and reviews relevant academic research from sociology, communication science, computer science, and information science. The chapter introduces a broad classification of SNS friendship and demonstrates the range of types of SNS, each with its own unique combination of functionalities and objectives. The users and uses for SNSs are also varied, both in terms of the broad range of reasons for using a site and also, at the microlevel, in terms of the understanding of the core concept of friending. The commonly discussed issues of privacy and security are reviewed, including the extent to which they are taken seriously by users and SNS designers. New forms of electronic communication seem to always generate their own new language varieties and SNS language is briefly discussed. The chapter is supported by a series of MySpace investigations to illustrate key points and give additional information. Finally, the potential for programmers to create small applications to run within SNSs or with SNS data is discussed and speculations made about future developments.", "title": "" }, { "docid": "7eb5b730d47da0ee7be8f6c7f4963a2e", "text": "D.T. Lennon†1, H. Moon†1, L.C. Camenzind, Liuqi Yu, D.M. Zumbühl, G.A.D. Briggs, M.A. Osborne, E.A. Laird, and N. Ares Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH, United Kingdom Department of Physics, University of Basel, 4056 Basel, Switzerland Department of Engineering, University of Oxford, Walton Well Road, Oxford OX2 6ED, United Kingdom Department of Physics, Lancaster University, Lancaster, LA1 4YB, United Kingdom (Dated: October 25, 2018)", "title": "" }, { "docid": "969c21b522f0247504d93f23084711c5", "text": "A new approach for high-speed micro-crack detection of solar wafers with variable thickness is proposed. Using a pair of laser displacement sensors, wafer thickness is measured and the lighting intensity is automatically adjusted to compensate for loss in NIR transmission due to varying thickness. In this way, the image contrast is maintained relatively uniform for the entire size of a wafer. An improved version of Niblack segmentation algorithm is developed for this application. Experimental results show the effectiveness of the system when tested with solar wafers with thickness ranging from 125 to 170 μm. Since the inspection is performed on the fly, therefore, a high throughput rate of more than 3600 wafers per hour can easily be obtained. Hence, the proposed system enables rapid in-line monitoring and real-time measurement.", "title": "" }, { "docid": "90d82110c2b10c98c5cb99d68ebb9df3", "text": "Purpose – The purpose of this paper is to investigate the demographic characteristics of small and medium enterprises (SMEs) with regards to their patterns of internet-based information and communications technology (ICT) adoption, taking into account the dimensions of ICT benefits, barriers, and subsequently adoption intention. Design/methodology/approach – A questionnaire-based survey is used to collect data from 406 managers or owners of SMEs in Malaysia. 
Findings – The results reveal that the SMEs would adopt internet-based ICT regardless of years of business start-up and internet experience. Some significant differences are spotted between manufacturing and service SMEs in terms of their demographic characteristics and internet-based ICT benefits, barriers, and adoption intention. Both the industry types express intention to adopt internet-based ICT, with the service-based SMEs demonstrating greater intention. Research limitations/implications – The paper focuses only on the SMEs in the southern region of Malaysia. Practical implications – The findings offer valuable insights to the SMEs – in particular promoting internet-based ICT adoption for future business success. Originality/value – This paper is perhaps one of the first to comprehensively investigate the relationship between demographic characteristics of SMEs and the various variables affecting their internet-based ICT adoption intention.", "title": "" }, { "docid": "88e72e039de541b00722901a8eff7d19", "text": "When building agents and synthetic characters, and in order to achieve believability, we must consider the emotional relations established between users and characters, that is, we must consider the issue of \"empathy\". Defined in broad terms as \"An observer reacting emotionally because he perceives that another is experiencing or about to experience an emotion\", empathy is an important element to consider in the creation of relations between humans and agents. In this paper we will focus on the role of empathy in the construction of synthetic characters, providing some requirements for such construction and illustrating the presented concepts with a specific system called FearNot!. FearNot! was developed to address the difficult and often devastating problem of bullying in schools. By using role playing and empathic synthetic characters in a 3D environment, FearNot! allows children from 8 to 12 to experience a virtual scenario where they can witness (in a third-person perspective) bullying situations. To build empathy into FearNot! we have considered the following components: agentýs architecture; the charactersý embodiment and emotional expression; proximity with the user and emotionally charged situations.We will describe how these were implemented in FearNot! and report on the preliminary results we have with it.", "title": "" }, { "docid": "4f631769d8267c81ea568c9eed71ac09", "text": "To study a phenomenon scientifically, it must be appropriately described and measured. How mindfulness is conceptualized and assessed has considerable importance for mindfulness science, and perhaps in part because of this, these two issues have been among the most contentious in the field. In recognition of the growing scientific and clinical interest in", "title": "" }, { "docid": "d62c50e109195f483119ebe36350ff54", "text": "We address the problem of inferring users’ interests from microblogging sites such as Twitter, based on their utterances and interactions in the social network. Inferring user interests is important for systems such as search and recommendation engines to provide information that is more attuned to the likes of its users. In this paper, we propose a probabilistic generative model of user utterances that encapsulates both user and network information. This model captures the complex interactions between varied interests of the users, his level of activeness in the network, and the information propagation from the neighbors. 
As exact probabilistic inference in this model is intractable, we propose an online variational inference algorithm that also takes into account evolving social graph, user and his neighbors’ interests. We prove the optimality of the online inference with respect to an equivalent batch update. We present experimental results performed on the actual Twitter users, validating our approach. We also present extensive results showing inadequacy of using Mechanical Turk platform for large scale validation.", "title": "" }, { "docid": "a5274779804272ffc76edfa9b47ef805", "text": "World energy demand is expected to increase due to the expanding urbanization, better living standards and increasing population. At a time when society is becoming increasingly aware of the declining reserves of fossil fuels beside the environmental concerns, it has become apparent that biodiesel is destined to make a substantial contribution to the future energy demands of the domestic and industrial economies. There are different potential feedstocks for biodiesel production. Non-edible vegetable oils which are known as the second generation feedstocks can be considered as promising substitutions for traditional edible food crops for the production of biodiesel. The use of non-edible plant oils is very significant because of the tremendous demand for edible oils as food source. Moreover, edible oils’ feedstock costs are far expensive to be used as fuel. Therefore, production of biodiesel from non-edible oils is an effective way to overcome all the associated problems with edible oils. However, the potential of converting non-edible oil into biodiesel must be well examined. This is because physical and chemical properties of biodiesel produced from any feedstock must comply with the limits of ASTM and DIN EN specifications for biodiesel fuels. This paper introduces non-edible vegetable oils to be used as biodiesel feedstocks. Several aspects related to these feedstocks have been reviewed from various recent publications. These aspects include overview of non-edible oil resources, advantages of non-edible oils, problems in exploitation of non-edible oils, fatty acid composition profiles (FAC) of various non-edible oils, oil extraction techniques, technologies of biodiesel production from non-edible oils, biodiesel standards and characterization, properties and characteristic of non-edible biodiesel and engine performance and emission production. As a conclusion, it has been found that there is a huge chance to produce biodiesel from non-edible oil sources and therefore it can boost the future production of biodiesel. © 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "943667ea2f62ca74a3daae85262a03ab", "text": "Facial action unit (AU) detection and face alignment are two highly correlated tasks since facial landmarks can provide precise AU locations to facilitate the extraction of meaningful local features for AU detection. Most existing AU detection works often treat face alignment as a preprocessing and handle the two tasks independently. In this paper, we propose a novel end-to-end deep learning framework for joint AU detection and face alignment, which has not been explored before. In particular, multi-scale shared features are learned firstly, and high-level features of face alignment are fed into AU detection. Moreover, to extract precise local features, we propose an adaptive attention learning module to refine the attention map of each AU adaptively. 
Finally, the assembled local features are integrated with face alignment features and global features for AU detection. Experiments on BP4D and DISFA benchmarks demonstrate that our framework significantly outperforms the state-of-the-art methods for AU detection.", "title": "" }, { "docid": "077479a268be00930533f4ce8fce2845", "text": "Our research goals are to understand and model the factors that affect trust in intelligent systems across a variety of application domains. In this chapter, we present two methods that can be used to build models of trust for such systems. The first method is the use of surveys, in which large numbers of people are asked to identify and rank factors that would influence their trust of a particular intelligent system. Results from multiple surveys exploring multiple application domains can be used to build a core model of trust and to identify domain specific factors that are needed to modify the core model to improve its accuracy and usefulness. The second method involves conducting experiments where human subjects use the intelligent system, where a variety of factors can be controlled in the studies to explore different factors. Based upon the results of these human subjects experiments, a trust model can be built. These trust models can be used to create design guidelines, to predict initial trust levels before the start of a system’s use, and to measure the evolution of trust over the use of a system. With increased understanding of how to model trust, we can build systems that will be more accepted and used appropriately by target populations.", "title": "" }, { "docid": "f005ebceeac067ffae197fee603ed8c7", "text": "The extended Kalman filter (EKF) is one of the most widely used methods for state estimation with communication and aerospace applications based on its apparent simplicity and tractability (Shi et al., 2002; Bolognani et al., 2003; Wu et al., 2004). However, for an EKF to guarantee satisfactory performance, the system model should be known exactly. Unknown external disturbances may result in the inaccuracy of the state estimate, even cause divergence. This difficulty has been recognized in the literature (Reif & Unbehauen, 1999; Reif et al., 2000), and several schemes have been developed to overcome it. A traditional approach to improve the performance of the filter is the 'covariance setting' technique, where a positive definite estimation error covariance matrix is chosen by the filter designer (Einicke et al., 2003; Bolognani et al., 2003). As it is difficult to manually tune the covariance matrix for dynamic system, adaptive extended Kalman filter (AEKF) approaches for online estimation of the covariance matrix have been adopted (Kim & ILTIS, 2004; Yu et al., 2005; Ahn & Won, 2006). However, only in some special cases, the optimal estimation of the covariance matrix can be obtained. And inaccurate approximation of the covariance matrix may blur the state estimate. Recently, the robust H∞ filter has received considerable attention (Theodor et al., 1994; Shen & Deng, 1999; Zhang et al., 2005; Tseng & Chen, 2001). The robust filters take different forms depending on what kind of disturbances are accounted for, while the general performance criterion of the filters is to guarantee a bounded energy gain from the worst possible disturbance to the estimation error. 
Although the robust extended Kalman filter (REKF) has been deeply investigated (Einicke & White, 1999; Reif et al., 1999; Seo et al., 2006), how to prescribe the level of disturbances attenuation is still an open problem. In general, the selection of the attenuation level can be seen as a tradeoff between the optimality and the robustness. In other words, the robustness of the REKF is obtained at the expense of optimality. This chapter reviews the adaptive robust extended Kalman filter (AREKF), an effective algorithm which will remain stable in the presence of unknown disturbances, and yield accurate estimates in the absence of disturbances (Xiong et al., 2008). The key idea of the AREKF is to design the estimator based on the stability analysis, and determine whether the error covariance matrix should be reset according to the magnitude of the innovation.", "title": "" }, { "docid": "313c8ba6d61a160786760543658185df", "text": "In this review, we collate information about ticks identified in different parts of the Sudan and South Sudan since 1956 in order to identify gaps in tick prevalence and create a map of tick distribution. This will avail basic data for further research on ticks and policies for the control of tick-borne diseases. In this review, we discuss the situation in the Republic of South Sudan as well as Sudan. For this purpose we have divided Sudan into four regions, namely northern Sudan (Northern and River Nile states), central Sudan (Khartoum, Gazera, White Nile, Blue Nile and Sennar states), western Sudan (North and South Kordofan and North, South and West Darfour states) and eastern Sudan (Red Sea, Kassala and Gadarif states).", "title": "" }, { "docid": "ff5b64a4c52fd9436a2d5dbd4db93da7", "text": "As one of the most popular software testing techniques, fuzzing can find a variety of weaknesses in a program, such as software bugs and vulnerabilities, by generating numerous test inputs. Due to its effectiveness, fuzzing is regarded as a valuable bug hunting method. In this paper, we present an overview of fuzzing that concentrates on its general process, as well as classifications, followed by detailed discussion of the key obstacles and some state-of-the-art technologies which aim to overcome or mitigate these obstacles. We further investigate and classify several widely used fuzzing tools. Our primary goal is to equip the stakeholder with a better understanding of fuzzing and the potential solutions for improving fuzzing methods in the spectrum of software testing and security. To inspire future research, we also predict some future directions with regard to fuzzing.", "title": "" }, { "docid": "51e0a26f73fb2cc56286a15c4e15d9cd", "text": "OBJECTIVE\nTo determine the effectiveness of a water flosser in reducing the bleeding on probing (BOP) index around dental implants as compared to flossing.\n\n\nMETHODS AND MATERIALS\nPatients with implants were randomly assigned to one of two groups in this examiner-masked, single-center study. The study compared the efficacy of a manual toothbrush paired with either traditional string floss or a water flosser.\n\n\nRESULTS\nThe primary outcome was the reduction in the incidence of BOP after 30 days. There were no differences in the percent of bleeding sites between the groups at baseline. 
At 30 days, 18 of the 22 (81.8%) implants in the water flosser group showed a reduction in BOP compared to 6 of the 18 (33.3%) in the floss group (P=0.0018).\n\n\nCONCLUSIONS\nThese results demonstrate that the water flosser group had statistically significantly greater bleeding reduction than the string floss group. The authors concluded that water flossing may be a useful adjuvant for implant hygiene maintenance.", "title": "" } ]
scidocsrr
912e28e6ac67ccba52c59c59f68d9f48
Straight to the Facts: Learning Knowledge Base Retrieval for Factual Visual Question Answering
[ { "docid": "0323cfb6e74e160c44e0922a49ecc28b", "text": "Generating diverse questions for given images is an important task for computational education, entertainment and AI assistants. Different from many conventional prediction techniques is the need for algorithms to generate a diverse set of plausible questions, which we refer to as creativity. In this paper we propose a creative algorithm for visual question generation which combines the advantages of variational autoencoders with long short-term memory networks. We demonstrate that our framework is able to generate a large set of varying questions given a single input image.", "title": "" }, { "docid": "8b998b9f8ea6cfe5f80a5b3a1b87f807", "text": "We describe a very simple bag-of-words baseline for visual question answering. This baseline concatenates the word features from the question and CNN features from the image to predict the answer. When evaluated on the challenging VQA dataset [2], it shows comparable performance to many recent approaches using recurrent neural networks. To explore the strength and weakness of the trained model, we also provide an interactive web demo1, and open-source code2.", "title": "" }, { "docid": "c0e2d1740bbe2c40e7acf262cb658ea2", "text": "The quest for algorithms that enable cognitive abilities is an important part of machine learning. A common trait in many recently investigated cognitive-like tasks is that they take into account different data modalities, such as visual and textual input. In this paper we propose a novel and generally applicable form of attention mechanism that learns high-order correlations between various data modalities. We show that high-order correlations effectively direct the appropriate attention to the relevant elements in the different data modalities that are required to solve the joint task. We demonstrate the effectiveness of our high-order attention mechanism on the task of visual question answering (VQA), where we achieve state-of-the-art performance on the standard VQA dataset.", "title": "" }, { "docid": "8328b1dd52bcc081548a534dc40167a3", "text": "This work aims to address the problem of imagebased question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.", "title": "" }, { "docid": "db806183810547435075eb6edd28d630", "text": "Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues.,,We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. Additionally to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. 
With MUTAN, we control the complexity of the merging scheme while keeping nice interpretable fusion relations. We show how the Tucker decomposition framework generalizes some of the latest VQA architectures, providing state-of-the-art results.", "title": "" }, { "docid": "4337f8c11a71533d38897095e5e6847a", "text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?”. They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.", "title": "" } ]
[ { "docid": "8508162ac44f56aaaa9c521e6628b7b2", "text": "Pervasive or ubiquitous computing was developed thanks to the technological evolution of embedded systems and computer communication means. Ubiquitous computing has given birth to the concept of smart spaces that facilitate our daily life and increase our comfort where devices provide proactively adpated services. In spite of the significant previous works done in this domain, there still a lot of work and enhancement to do in particular the taking into account of current user's context when providing adaptable services. In this paper we propose an approach for context-aware services adaptation for a smart living room using two machine learning methods.", "title": "" }, { "docid": "e7230519f0bd45b70c1cbd42f09cb9e8", "text": "Environmental isolates belonging to the genus Acidovorax play a crucial role in degrading a wide range of pollutants. Studies on Acidovorax are currently limited for many species due to the lack of genetic tools. Here, we described the use of the replicon from a small, cryptic plasmid indigenous to Acidovorx temperans strain CB2, to generate stably maintained shuttle vectors. In addition, we have developed a scarless gene knockout technique, as well as establishing green fluorescent protein (GFP) reporter and complementation systems. Taken collectively, these tools will improve genetic manipulations in the genus Acidovorax.", "title": "" }, { "docid": "df2ccac20cdb63038af362ea8950c62d", "text": "Data-intensive applications that operate on large volumes of data have motivated a fresh look at the design of data center networks. The first wave of proposals focused on designing pure packet-switched networks that provide full bisection bandwidth. However, these proposals significantly increase network complexity in terms of the number of links and switches required and the restricted rules to wire them up. On the other hand, optical circuit switching technology holds a very large bandwidth advantage over packet switching technology. This fact motivates us to explore how optical circuit switching technology could benefit a data center network. In particular, we propose a hybrid packet and circuit switched data center network architecture (or HyPaC for short) which augments the traditional hierarchy of packet switches with a high speed, low complexity, rack-to-rack optical circuit-switched network to supply high bandwidth to applications. We discuss the fundamental requirements of this hybrid architecture and their design options. To demonstrate the potential benefits of the hybrid architecture, we have built a prototype system called c-Through. c-Through represents a design point where the responsibility for traffic demand estimation and traffic demultiplexing resides in end hosts, making it compatible with existing packet switches. Our emulation experiments show that the hybrid architecture can provide large benefits to unmodified popular data center applications at a modest scale. Furthermore, our experimental experience provides useful insights on the applicability of the hybrid architecture across a range of deployment scenarios.", "title": "" }, { "docid": "8a607387d2803985d28d386258ba7fae", "text": "based on cross-cultural research. This approach expands earlier theoretical interpretations offered for the significance of cave art that fail to account for central aspects of cave art material. 
Clottes & Lewis-Williams (1998), Smith (1992) and Ryan (1999) concur in the interpretation that neurologically-based shamanic practices were central to cave art (cf. Lewis-Williams 1997a,b). Clottes & Lewis-Williams suggest that, in spite of the temporal distance, we have better access to Upper Palaeolithic peoples’ religious experiences than other aspects of their lives because of the neuropsychological basis of those experiences. The commonality in the experiences of shamanism across space and time provides a basis for forming ‘some idea of the social and mental context out of which Upper Palaeolithic religion and art came’ (Clottes & LewisMichael Winkelman", "title": "" }, { "docid": "946e5205a93f71e0cfadf58df186ef7e", "text": "Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by L2 normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.", "title": "" }, { "docid": "58c995ef5e6b2b46c1c1d716733f051f", "text": "A printed dipole with an adjustable integrated balun is presented, featuring a broadband performance and flexibility for the matching to different impedance values. As a benchmarking topology, an eight-element linear antenna array is designed and built for base stations used in broadband wireless communications.", "title": "" }, { "docid": "1ae41c6041c8c09277dab4ccbd4c2773", "text": "While the Internet of Things (IoT) technology has been widely recognized as the essential part of Smart Cities, it also brings new challenges in terms of privacy and security. Access control (AC) is among the top security concerns, which is critical in resource and information protection over IoT devices. Traditional access control approaches, like Access Control Lists (ACL), Role-based Access Control (RBAC) and Attribute-based Access Control (ABAC), are not able to provide a scalable, manageable and efficient mechanism to meet the requirements of IoT systems. Another weakness in today’s AC is the centralized authorization server, which can be the performance bottleneck or the single point of failure. 
Inspired by the smart contract on top of a blockchain protocol, this paper proposes BlendCAC, which is a decentralized, federated capability-based AC mechanism to enable an effective protection for devices, services and information in large scale IoT systems. A federated capability-based delegation model (FCDM) is introduced to support hierarchical and multi-hop delegation. The mechanism for delegate authorization and revocation is explored. A robust identity-based capability token management strategy is proposed, which takes advantage of the smart contract for registering, propagating and revocating of the access authorization. A proof-of-concept prototype has been implemented on both resources-constrained devices (i.e., Raspberry PI node) and more powerful computing devices (i.e., laptops), and tested on a local private blockchain network. The experimental results demonstrate the feasibility of the BlendCAC to offer a decentralized, scalable, lightweight and fine-grained AC solution for IoT systems.", "title": "" }, { "docid": "c1f9456f9479378cd887b3f1c4d15016", "text": "Emerging Internet of Things system utilizes heterogeneous proximity-based ubiquitous resources to provide various real-time multimedia services to mobile application users. In general, such a system relies on distant Cloud services to perform all the data processing tasks, which results in explicit latency. Consequently, Fog computing, which utilizes the proximal computational and networking resources, has arisen. However, utilizing Fog for real-time mobile applications faces the new challenge of ensuring the seamless accessibility of Fog services on the move. This paper proposes a framework for proactive Fog service discovery and process migration using Mobile Ad hoc Social Network in proximity. The proposed framework enables Fog-assisted ubiquitous multimedia service provisioning in proximity without distant Cloud services. A proof-of-concept prototype has been implemented and tested on real devices. Additionally, the proposed Fog service discovery and process migration algorithm have been tested on the ONE simulator.", "title": "" }, { "docid": "b90b7b44971cf93ba343b5dcdd060875", "text": "This paper discusses a general approach to qualitative modeling based on fuzzy logic. The method of qualitative modeling is divided into two parts: fuzzy modeling and linguistic approximation. It proposes to use a fuzzy clustering method (fuzzy c-means method) to identify the structure of a fuzzy model. To clarify the advantages of the proposed method, it also shows some examples of modeling, among them a model of a dynamical process and a model of a human operator’s control action.", "title": "" }, { "docid": "104b72422962b2fe339eae3616dced0e", "text": "We present an efficient algorithm to compute the intersection of algebraic and NURBS surfaces. Our approach is based on combining the marching methods with the algbraic formulation. In particular, we propose and matrix computations. We present algorithms to compute a start point on each component of the intersection curve (both open and closed components), detect the presence of singularities, and find all the curve branches near the singularity. We also suggest methods to compute the step size during tracing to prevent component jumping. The algorithm runs an order of magnitude faster than previously published robust algorithms. 
The complexity of the algorithm is output sensitive.", "title": "" }, { "docid": "a2f46b51b65c56acf6768f8e0d3feb79", "text": "In this paper we introduce Linear Relational Embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and relations is learned by maximizing an appropriate discriminative goodness function using gradient ascent. On a task involving family relationships, learning is fast and leads to good generalization. Learning Distributed Representations of Concepts using Linear Relational Embedding Alberto Paccanaro Geoffrey Hinton Gatsby Unit", "title": "" }, { "docid": "14d480e4c9256d0ef5e5684860ae4d7f", "text": "Changes in land use and land cover (LULC) as well as climate are likely to affect the geographic distribution of malaria vectors and parasites in the coming decades. At present, malaria transmission is concentrated mainly in the Amazon basin where extensive agriculture, mining, and logging activities have resulted in changes to local and regional hydrology, massive loss of forest cover, and increased contact between malaria vectors and hosts. Employing presence-only records, bioclimatic, topographic, hydrologic, LULC and human population data, we modeled the distribution of malaria and two of its dominant vectors, Anopheles darlingi, and Anopheles nuneztovari s.l. in northern South America using the species distribution modeling platform Maxent. Results from our land change modeling indicate that about 70,000 km2 of forest land would be lost by 2050 and 78,000 km2 by 2070 compared to 2010. The Maxent model predicted zones of relatively high habitat suitability for malaria and the vectors mainly within the Amazon and along coastlines. While areas with malaria are expected to decrease in line with current downward trends, both vectors are predicted to experience range expansions in the future. Elevation, annual precipitation and temperature were influential in all models both current and future. Human population mostly affected An. darlingi distribution while LULC changes influenced An. nuneztovari s.l. distribution. As the region tackles the challenge of malaria elimination, investigations such as this could be useful for planning and management purposes and aid in predicting and addressing potential impediments to elimination.", "title": "" }, { "docid": "a2f5bb20d262b8bab9450ae16cd43abc", "text": "The design and implementation of a high efficiency Class-J power amplifier (PA) for basestation applications is reported. A commercially available 10W GaN HEMT device was used, for which a large-signal model and an extrinsic parasitic model were available. Following Class-J theory, the needed harmonic terminations at the output of the transistor were defined and realised. Experimental results show good agreement with simulations verifying the class of operation. Efficiency above 70% is demonstrated with an output power of 39.7dBm at an input drive of 29dBm. High efficiency is sustained over a bandwidth of 140MHz.", "title": "" }, { "docid": "62e386315d2f4b8ed5ca3bcce71c4e83", "text": "Continuous space word embeddings learned from large, unstructured corpora have been shown to be effective at capturing semantic regularities in language. 
In this paper we replace LDA’s parameterization of “topics” as categorical distributions over opaque word types with multivariate Gaussian distributions on the embedding space. This encourages the model to group words that are a priori known to be semantically related into topics. To perform inference, we introduce a fast collapsed Gibbs sampling algorithm based on Cholesky decompositions of covariance matrices of the posterior predictive distributions. We further derive a scalable algorithm that draws samples from stale posterior predictive distributions and corrects them with a Metropolis–Hastings step. Using vectors learned from a domain-general corpus (English Wikipedia), we report results on two document collections (20-newsgroups and NIPS). Qualitatively, Gaussian LDA infers different (but still very sensible) topics relative to standard LDA. Quantitatively, our technique outperforms existing models at dealing with OOV words in held-out documents.", "title": "" }, { "docid": "e5dbd27e1dca8f920c416a570d05e94f", "text": "OBJECTIVE\nTo examine the effect of the adapted virtual reality cognitive training program in older adults with chronic schizophrenia.\n\n\nMETHODS\nOlder adults with chronic schizophrenia were recruited from a long-stay care setting and were randomly assigned into intervention (n = 12) and control group (n = 15). The intervention group received 10-session of VR program that consisted of 2 VR activities using IREX. The control group attended the usual programs in the setting.\n\n\nRESULTS\nAfter the 10-session intervention, older adults with chronic schizophrenia preformed significantly better than control in overall cognitive function (p .000), and in two cognitive subscales: repetition (p .001) and memory (p .040). These participants engaged in the VR activities volitionally. 
No problem of cybersickness was observed.\n\n\nCONCLUSIONS\nThe results of the current study indicate that engaging in the adapted virtual reality cognitive training program offers the potential for significant gains in cognitive function of the older adults with chronic schizophrenia.", "title": "" }, { "docid": "96aa1f19a00226af7b5bbe0bb080582e", "text": "CONTEXT\nComprehensive discharge planning by advanced practice nurses has demonstrated short-term reductions in readmissions of elderly patients, but the benefits of more intensive follow-up of hospitalized elders at risk for poor outcomes after discharge has not been studied.\n\n\nOBJECTIVE\nTo examine the effectiveness of an advanced practice nurse-centered discharge planning and home follow-up intervention for elders at risk for hospital readmissions.\n\n\nDESIGN\nRandomized clinical trial with follow-up at 2, 6, 12, and 24 weeks after index hospital discharge.\n\n\nSETTING\nTwo urban, academically affiliated hospitals in Philadelphia, Pa.\n\n\nPARTICIPANTS\nEligible patients were 65 years or older, hospitalized between August 1992 and March 1996, and had 1 of several medical and surgical reasons for admission.\n\n\nINTERVENTION\nIntervention group patients received a comprehensive discharge planning and home follow-up protocol designed specifically for elders at risk for poor outcomes after discharge and implemented by advanced practice nurses.\n\n\nMAIN OUTCOME MEASURES\nReadmissions, time to first readmission, acute care visits after discharge, costs, functional status, depression, and patient satisfaction.\n\n\nRESULTS\nA total of 363 patients (186 in the control group and 177 in the intervention group) were enrolled in the study; 70% of intervention and 74% of control subjects completed the trial. Mean age of sample was 75 years; 50% were men and 45% were black. By week 24 after the index hospital discharge, control group patients were more likely than intervention group patients to be readmitted at least once (37.1 % vs 20.3 %; P<.001). Fewer intervention group patients had multiple readmissions (6.2% vs 14.5%; P = .01) and the intervention group had fewer hospital days per patient (1.53 vs 4.09 days; P<.001). Time to first readmission was increased in the intervention group (P<.001). At 24 weeks after discharge, total Medicare reimbursements for health services were about $1.2 million in the control group vs about $0.6 million in the intervention group (P<.001). There were no significant group differences in post-discharge acute care visits, functional status, depression, or patient satisfaction.\n\n\nCONCLUSIONS\nAn advanced practice nurse-centered discharge planning and home care intervention for at-risk hospitalized elders reduced readmissions, lengthened the time between discharge and readmission, and decreased the costs of providing health care. Thus, the intervention demonstrated great potential in promoting positive outcomes for hospitalized elders at high risk for rehospitalization while reducing costs.", "title": "" }, { "docid": "799bc245ecfabf59416432ab62fe9320", "text": "This study examines resolution skills in phishing email detection, defined as the abilities of individuals to discern correct judgments from incorrect judgments in probabilistic decisionmaking. An illustration of the resolution skills is provided. 
A number of antecedents to resolution skills in phishing email detection, including familiarity with the sender, familiarity with the email, online transaction experience, prior victimization of phishing attack, perceived selfefficacy, time to judgment, and variability of time in judgments, are examined. Implications of the study are further discussed.", "title": "" }, { "docid": "3f255fa3dcb8b027f1736b30e98254f9", "text": "We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution of the Markov chain is conditional on the previous state, generally involving a small move, so this conditional distribution has fewer dominant modes, being unimodal in the limit of small moves. Thus, it is easier to learn because it is easier to approximate its partition function, more like learning to perform supervised function approximation, with gradients that can be obtained by backprop. We provide theorems that generalize recent work on the probabilistic interpretation of denoising autoencoders and obtain along the way an interesting justification for dependency networks and generalized pseudolikelihood, along with a definition of an appropriate joint distribution and sampling mechanism even when the conditionals are not consistent. GSNs can be used with missing inputs and can be used to sample subsets of variables given the rest. We validate these theoretical results with experiments on two image datasets using an architecture that mimics the Deep Boltzmann Machine Gibbs sampler but allows training to proceed with simple backprop, without the need for layerwise pretraining.", "title": "" }, { "docid": "490dc6ee9efd084ecf2496b72893a39a", "text": "The rise of blockchain-based cryptocurrencies has led to an explosion of services using distributed ledgers as their underlying infrastructure. However, due to inherently single-service oriented blockchain protocols, such services can bloat the existing ledgers, fail to provide sufficient security, or completely forego the property of trustless auditability. Security concerns, trust restrictions, and scalability limits regarding the resource requirements of users hamper the sustainable development of loosely-coupled services on blockchains. This paper introduces Aspen, a sharded blockchain protocol designed to securely scale with increasing number of services. Aspen shares the same trust model as Bitcoin in a peer-to-peer network that is prone to extreme churn containing Byzantine participants. It enables introduction of new services without compromising the security, leveraging the trust assumptions, or flooding users with irrelevant messages.", "title": "" }, { "docid": "d64d589068d68ef19d7ac77ab55c8318", "text": "Cloud computing is a revolutionary paradigm to deliver computing resources, ranging from data storage/processing to software, as a service over the network, with the benefits of efficient resource utilization and improved manageability. The current popular cloud computing models encompass a cluster of expensive and dedicated machines to provide cloud computing services, incurring significant investment in capital outlay and ongoing costs. 
A more cost effective solution would be to exploit the capabilities of an ad hoc cloud which consists of a cloud of distributed and dynamically untapped local resources. The ad hoc cloud can be further classified into static and mobile clouds: an ad hoc static cloud harnesses the underutilized computing resources of general purpose machines, whereas an ad hoc mobile cloud harnesses the idle computing resources of mobile devices. However, the dynamic and distributed characteristics of ad hoc cloud introduce challenges in system management. In this article, we propose a generic em autonomic mobile cloud (AMCloud) management framework for automatic and efficient service/resource management of ad hoc cloud in both static and mobile modes. We then discuss in detail the possible security and privacy issues in ad hoc cloud computing. A general security architecture is developed to facilitate the study of prevention and defense approaches toward a secure autonomic cloud system. This article is expected to be useful for exploring future research activities to achieve an autonomic and secure ad hoc cloud computing system.", "title": "" } ]
scidocsrr
ceecd0bdda7f5916200f2659a333cdf1
DemographicVis: Analyzing demographic information based on user generated content
[ { "docid": "deda12e60ddba97be009ce1f24feba7e", "text": "It is important for many different applications such as government and business intelligence to analyze and explore the diffusion of public opinions on social media. However, the rapid propagation and great diversity of public opinions on social media pose great challenges to effective analysis of opinion diffusion. In this paper, we introduce a visual analysis system called OpinionFlow to empower analysts to detect opinion propagation patterns and glean insights. Inspired by the information diffusion model and the theory of selective exposure, we develop an opinion diffusion model to approximate opinion propagation among Twitter users. Accordingly, we design an opinion flow visualization that combines a Sankey graph with a tailored density map in one view to visually convey diffusion of opinions among many users. A stacked tree is used to allow analysts to select topics of interest at different levels. The stacked tree is synchronized with the opinion flow visualization to help users examine and compare diffusion patterns across topics. Experiments and case studies on Twitter data demonstrate the effectiveness and usability of OpinionFlow.", "title": "" } ]
[ { "docid": "53569b0225db62d4c627e41469cb91b8", "text": "A first proof-of-concept mm-sized implant based on ultrasonic power transfer and RF uplink data transmission is presented. The prototype consists of a 1 mm × 1 mm piezoelectric receiver, a 1 mm × 2 mm chip designed in 65 nm CMOS and a 2.5 mm × 2.5 mm off-chip antenna, and operates through 3 cm of chicken meat which emulates human tissue. The implant supports a DC load power of 100 μW allowing for high-power applications. It also transmits consecutive UWB pulse sequences activated by the ultrasonic downlink data path, demonstrating sufficient power for an Mary PPM transmitter in uplink.", "title": "" }, { "docid": "bbd1e7e579d2543be236a5f69cf42981", "text": "To date, there is almost no work on the use of adverbs in sentiment analysis, nor has there been any work on the use of adverb-adjective combinations (AACs). We propose an AAC-based sentiment analysis technique that uses a linguistic analysis of adverbs of degree. We define a set of general axioms (based on a classification of adverbs of degree into five categories) that all adverb scoring techniques must satisfy. Instead of aggregating scores of both adverbs and adjectives using simple scoring functions, we propose an axiomatic treatment of AACs based on the linguistic classification of adverbs. Three specific AAC scoring methods that satisfy the axioms are presented. We describe the results of experiments on an annotated set of 200 news articles (annotated by 10 students) and compare our algorithms with some existing sentiment analysis algorithms. We show that our results lead to higher accuracy based on Pearson correlation with human subjects.", "title": "" }, { "docid": "12524304546ca59b7e8acb2a7f6d6699", "text": "Multiple-choice items are a mainstay of achievement testing. The need to adequately cover the content domain to certify achievement proficiency by producing meaningful precise scores requires many high-quality items. More 3-option items can be administered than 4or 5-option items per testing time while improving content coverage, without detrimental effects on psychometric quality of test scores. Researchers have endorsed 3-option items for over 80 years with empirical evidence—the results of which have been synthesized in an effort to unify this endorsement and encourage its adoption.", "title": "" }, { "docid": "c43b77b56a6e2cb16a6b85815449529d", "text": "We propose a new method for clustering multivariate time series. A univariate time series can be represented by a fixed-length vector whose components are statistical features of the time series, capturing the global structure. These descriptive vectors, one for each component of the multivariate time series, are concatenated, before being clustered using a standard fast clustering algorithm such as k-means or hierarchical clustering. Such statistical feature extraction also serves as a dimension-reduction procedure for multivariate time series. We demonstrate the effectiveness and simplicity of our proposed method by clustering human motion sequences: dynamic and high-dimensional multivariate time series. The proposed method based on univariate time series structure and statistical metrics provides a novel, yet simple and flexible way to cluster multivariate time series data efficiently with promising accuracy. 
The success of our method on the case study suggests that clustering may be a valuable addition to the tools available for human motion pattern recognition research.", "title": "" }, { "docid": "e63dfda54251b861691d88b8d7f00298", "text": "The negative effects of sleep deprivation on alertness and cognitive performance suggest decreases in brain activity and function, primarily in the thalamus, a subcortical structure involved in alertness and attention, and in the prefrontal cortex, a region subserving alertness, attention, and higher-order cognitive processes. To test this hypothesis, 17 normal subjects were scanned for quantifiable brain activity changes during 85 h of sleep deprivation using positron emission tomography (PET) and (18)Fluorine-2-deoxyglucose ((18)FDG), a marker for regional cerebral metabolic rate for glucose (CMRglu) and neuronal synaptic activity. Subjects were scanned prior to and at 24-h intervals during the sleep deprivation period, for a total of four scans per subject. During each 30 min (18)FDG uptake, subjects performed a sleep deprivation-sensitive Serial Addition/Subtraction task. Polysomnographic monitoring confirmed that subjects were awake. Twenty-four hours of sleep deprivation, reported here, resulted in a significant decrease in global CMRglu, and significant decreases in absolute regional CMRglu in several cortical and subcortical structures. No areas of the brain evidenced a significant increase in absolute regional CMRglu. Significant decreases in relative regional CMRglu, reflecting regional brain reductions greater than the global decrease, occurred predominantly in the thalamus and prefrontal and posterior parietal cortices. Alertness and cognitive performance declined in association with these brain deactivations. This study provides evidence that short-term sleep deprivation produces global decreases in brain activity, with larger reductions in activity in the distributed cortico-thalamic network mediating attention and higher-order cognitive processes, and is complementary to studies demonstrating deactivation of these cortical regions during NREM and REM sleep.", "title": "" }, { "docid": "6a61dc5ea4f3c664f56f0449da181ef4", "text": "In recent times, the study and use of induced pluripotent stem cells (iPSC) have become important in order to avoid the ethical issues surrounding the use of embryonic stem cells. Therapeutic, industrial and research based use of iPSC requires large quantities of cells generated in vitro. Mammalian cells, including pluripotent stem cells, have been expanded using 3D culture, however current limitations have not been overcome to allow a uniform, optimized platform for dynamic culture of pluripotent stem cells to be achieved. In the current work, we have expanded mouse iPSC in a spinner flask using Cytodex 3 microcarriers. We have looked at the effect of agitation on the microcarrier survival and optimized an agitation speed that supports bead suspension and iPS cell expansion without any bead breakage. Under the optimized conditions, the mouse iPSC were able to maintain their growth, pluripotency and differentiation capability. 
We demonstrate that microcarrier survival and iPS cell expansion in a spinner flask are reliant on a very narrow range of spin rates, highlighting the need for precise control of such set ups and the need for improved design of more robust systems.", "title": "" }, { "docid": "754fb355da63d024e3464b4656ea5e8d", "text": "Improvements in implant designs have helped advance successful immediate anterior implant placement into fresh extraction sockets. Clinical techniques described in this case enable practitioners to achieve predictable esthetic success using a method that limits the amount of buccal contour change of the extraction site ridge and potentially enhances the thickness of the peri-implant soft tissues coronal to the implant-abutment interface. This approach involves atraumatic tooth removal without flap elevation, and placing a bone graft into the residual gap around an immediate fresh-socket anterior implant with a screw-retained provisional restoration acting as a prosthetic socket seal device.", "title": "" }, { "docid": "209203c297898a2251cfd62bdfc37296", "text": "Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computerbased problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.", "title": "" }, { "docid": "5673fc81ba9a1d26531bcf7a1572e873", "text": "Spatio-temporal channel information obtained via channel sounding is invaluable for implementing equalizers, multi-antenna systems, and dynamic modulation schemes in next-generation wireless systems. The most straightforward means of performing channel measurements is in the frequency domain using a vector network analyzer (VNA). However, the high cost of VNAs often leads engineers to seek more economical solutions by measuring the wireless channel in the time domain. The bandwidth compression of the sliding correlator channel sounder makes it the preferred means of performing time-domain channel measurements.", "title": "" }, { "docid": "d4858f49894fcceb15a121a25da4d861", "text": "Remote backup copies of databases are often maintained to ensure availability of data even in the presence of extensive failures, for which local replication mechanisms may be inadequate. We present two versions of an epoch algorithm for maintaining a consistent remote backup copy of a database. The algorithms ensure scalability, which makes them suitable for very large databases. The correctness and the performance of the algorithms are discussed, and an additional application for distributed group commit is given.", "title": "" }, { "docid": "5d91cf986b61bf095c04b68da2bb83d3", "text": "The adeno-associated virus (AAV) vector has been used in preclinical and clinical trials of gene therapy for central nervous system (CNS) diseases. One of the biggest challenges of effectively delivering AAV to the brain is to surmount the blood-brain barrier (BBB). Herein, we identified several potential BBB shuttle peptides that significantly enhanced AAV8 transduction in the brain after a systemic administration, the best of which was the THR peptide. The enhancement of AAV8 brain transduction by THR is dose-dependent, and neurons are the primary THR targets. 
Mechanism studies revealed that THR directly bound to the AAV8 virion, increasing its ability to cross the endothelial cell barrier. Further experiments showed that binding of THR to the AAV virion did not interfere with AAV8 infection biology, and that THR competitively blocked transferrin from binding to AAV8. Taken together, our results demonstrate, for the first time, that BBB shuttle peptides are able to directly interact with AAV and increase the ability of the AAV vectors to cross the BBB for transduction enhancement in the brain. These results will shed important light on the potential applications of BBB shuttle peptides for enhancing brain transduction with systemic administration of AAV vectors.", "title": "" }, { "docid": "c71635ec5c0ef83c850cab138330f727", "text": "Academic institutions are now drawing attention in finding methods for making effective learning process, for identifying learner’s achievements and weakness, for tracing academic progress and also for predicting future performance. People’s increased expectation for accountability and transparency makes it necessary to implement big data analytics in the educational institution. But not all the educationalist and administrators are ready to take the challenge. So, it is now obvious to know about the necessity and opportunity as well as challenges of implementing big data analytics. This paper will describe the needs, opportunities and challenges of implementing big data analytics in the education sector.", "title": "" }, { "docid": "872d589cd879dee7d88185851b9546ab", "text": "Considering few treatments are available to slow or stop neurodegenerative disorders, such as Alzheimer’s disease and related dementias (ADRD), modifying lifestyle factors to prevent disease onset are recommended. The Voice, Activity, and Location Monitoring system for Alzheimer’s disease (VALMA) is a novel ambulatory sensor system designed to capture natural behaviours across multiple domains to profile lifestyle risk factors related to ADRD. Objective measures of physical activity and sleep are provided by lower limb accelerometry. Audio and GPS location records provide verbal and mobility activity, respectively. Based on a familiar smartphone package, data collection with the system has proven to be feasible in community-dwelling older adults. Objective assessments of everyday activity will impact diagnosis of disease and design of exercise, sleep, and social interventions to prevent and/or slow disease progression.", "title": "" }, { "docid": "ce098e1e022235a2c322a231bff8da6c", "text": "In recent years, due to the development of three-dimensional scanning technology, the opportunities for real objects to be three-dimensionally measured, taken into the PC as point cloud data, and used for various contents are increasing. However, the point cloud data obtained by three-dimensional scanning has many problems such as data loss due to occlusion or the material of the object to be measured, and occurrence of noise. Therefore, it is necessary to edit the point cloud data obtained by scanning. Particularly, since the point cloud data obtained by scanning contains many data missing, it takes much time to fill holes. Therefore, we propose a method to automatically filling hole obtained by three-dimensional scanning. In our method, a surface is generated from a point in the vicinity of a hole, and a hole region is filled by generating a point sequence on the surface. 
This method is suitable for processing to fill a large number of holes because point sequence interpolation can be performed automatically for hole regions without requiring user input.", "title": "" }, { "docid": "525ddfaae4403392e8817986f2680a68", "text": "Documentation errors increase healthcare costs and cause unnecessary patient deaths. As the standard language for diagnoses and billing, ICD codes serve as the foundation for medical documentation worldwide. Despite the prevalence of electronic medical records, hospitals still witness high levels of ICD miscoding. In this paper, we propose to automatically document ICD codes with far-field speech recognition. Far-field speech occurs when the microphone is located several meters from the source, as is common with smart homes and security systems. Our method combines acoustic signal processing with recurrent neural networks to recognize and document ICD codes in real time. To evaluate our model, we collected a far-field speech dataset of ICD-10 codes and found our model to achieve 87% accuracy with a BLEU score of 85%. By sampling from an unsupervised medical language model, our method is able to outperform existing methods. Overall, this work shows the potential of automatic speech recognition to provide efficient, accurate, and cost-effective healthcare documentation.", "title": "" }, { "docid": "7196b6f6b14827d60f968534d52b4852", "text": "Therapeutic applications of the psychedelics or hallucinogens found cross-culturally involve treatment of a variety of physical, psychological, and social maladies. Modern medicine has similarly found that a range of conditions may be successfully treated with these agents. The ability to treat a wide variety of conditions derives from variation in active ingredients, doses and modes of application, and factors of set and setting manipulated in ritual. Similarities in effects reported cross-culturally reflect biological mechanisms, while success in the treatment of a variety of specific psychological conditions points to the importance of ritual in eliciting their effects. Similar bases involve action on the serotonin and dopamine neurotransmitter systems that can be characterized as psychointegration: an elevation of ancient brain processes. Therapeutic Application of Sacred Medicines in the Premodern and Modern World Societies worldwide have discovered therapeutic applications of psychoactive plants, often referred to as sacred medicines, particularly those called psychedelics or hallucinogens. Hundreds of species of such plants and fungi were used for medicinal and religious purposes (see Schultes et al. 1992; Rätsch 2005), as well as for a variety of psychological and social conditions, culture-bound syndromes, and a range of physical diseases (see Schultes and Winkelman 1996). This review illustrates the range of uses and the diverse potential of these substances for addressing human maladies. 
The ethnographic data on indigenous uses of these substances, combined with a brief overview of some of the modern medical studies, illustrate that a wide range of effects are obtained with these plants. These cultural therapies involve both pharmacological and ritual manipulations. Highly developed healing traditions selectively utilized different species of the same genus, different preparation methods and doses, varying admixtures, and a variety of ritual and psychotherapeutic processes to obtain specific desired effects. The wide range of uses of these plants suggests that they can contribute new active ingredients for modern medicine, particularly in psychiatry. As was illustrated by our illustrious contributors to Psychedelic Medicine (Winkelman and Roberts 2007a, b), there are a number of areas in which psychedelics have been established in treating what have been considered intractable health problems. While double-blind clinical trials have been sparse (but see Griffiths et al. 2006), this is not due to the lack of evidence for efficacy, but rather the administrative prohibitions that have drastically restricted clinical research. Nonetheless, using the criteria of phases of clinical evaluation, Winkelman and Roberts (2007c) concluded that there is at least Phase II evidence for the effectiveness of most of these psychedelics, supporting the continuation of more advanced trials. Furthermore, their success with the often intractable maladies, ranging from depression and cluster headaches to posttraumatic stress disorder (PTSD), obsessive-compulsive disorders, wasting syndromes, and addictions justifies their immediate use with these desperate patient populations. In addition, the wide variety of therapeutic uses found for these substances in cultures around the world suggest the potential for far greater applications. Therapeutic Uses of Psilocybin-containing ‘‘Magic Mushrooms’’ The Aztecs called these fungi teonanacatl, meaning ‘‘food of the gods’’; there is evidence of the use of psilocybin-containing mushrooms from many different genera in ritual healing practices in cultures around the world and deep in prehistory (see Rätsch 2005). One of the best documented therapeutic uses of psilocybin involves Maria Sabina, the Mazatec ‘‘Wise One’’ (Estrada 1981). Several different Psilocybe species are used by the Mazatec, as well as mushrooms of the Conocybe genera. In addition, other psychoactive plants are also employed, including Salvia divinorum Epl. and tobacco (Nicotiana rustica L., Solanaceae). 1 Phase II studies or trials use small groups of selected patients to determine effectiveness and ideal doses for a specific illness after Phase I trials have established safety (lack of toxicity) and safe dose ranges.", "title": "" }, { "docid": "3906227f9766e1434e33f1d817f99641", "text": "With the advent of large labelled datasets and highcapacity models, the performance of machine vision systems has been improving rapidly. However, the technology has still major limitations, starting from the fact that different vision problems are still solved by different models, trained from scratch or fine-tuned on the target data. The human visual system, in stark contrast, learns a universal representation for vision in the early life of an individual. This representation works well for an enormous variety of vision problems, with little or no change, with the major advantage of requiring little training data to solve any of them. 
In this paper we investigate whether neural networks may work as universal representations by studying their capacity in relation to the “size” of a large combination of vision problems. We do so by showing that a single neural network can learn simultaneously several very different visual domains (from sketches to planktons and MNIST digits) as well as, or better than, a number of specialized networks. However, we also show that this requires to carefully normalize the information in the network, by using domainspecific scaling factors or, more generically, by using an instance normalization layer.", "title": "" }, { "docid": "bc23df5db0a87c44c944ddf2898db407", "text": "B-trees have been ubiquitous in database management systems for several decades, and they serve in many other storage systems as well. Their basic structure and their basic operations are well understood including search, insertion, and deletion. However, implementation of transactional guarantees such as all-or-nothing failure atomicity and durability in spite of media and system failures seems to be difficult. High-performance techniques such as pseudo-deleted records, allocation-only logging, and transaction processing during crash recovery are widely used in commercial B-tree implementations but not widely understood. This survey collects many of these techniques as a reference for students, researchers, system architects, and software developers. Central in this discussion are physical data independence, separation of logical database contents and physical representation, and the concepts of user transactions and system transactions. Many of the techniques discussed are applicable beyond B-trees.", "title": "" }, { "docid": "be8eb6c72936af75c1e41f9e17ba2579", "text": "The use of unmanned aerial vehicles (UAVs) is growing rapidly across many civil application domains including realtime monitoring, providing wireless coverage, remote sensing, search and rescue, delivery of goods, security and surveillance, precision agriculture, and civil infrastructure inspection. Smart UAVs are the next big revolution in UAV technology promising to provide new opportunities in different applications, especially in civil infrastructure in terms of reduced risks and lower cost. Civil infrastructure is expected to dominate the more that $45 Billion market value of UAV usage. In this survey, we present UAV civil applications and their challenges. We also discuss current research trends and provide future insights for potential UAV uses. Furthermore, we present the key challenges for UAV civil applications, including: charging challenges, collision avoidance and swarming challenges, and networking and security related challenges. Based on our review of the recent literature, we discuss open research challenges and draw high-level insights on how these challenges might be approached.", "title": "" } ]
scidocsrr
379bc9f0d7e44547dd6a08eb885ccc15
Anomaly Detection in Wireless Sensor Networks in a Non-Stationary Environment
[ { "docid": "60fe7f27cd6312c986b679abce3fdea7", "text": "In matters of great importance that have financial, medical, social, or other implications, we often seek a second opinion before making a decision, sometimes a third, and sometimes many more. In doing so, we weigh the individual opinions, and combine them through some thought process to reach a final decision that is presumably the most informed one. The process of consulting \"several experts\" before making a final decision is perhaps second nature to us; yet, the extensive benefits of such a process in automated decision making applications have only recently been discovered by computational intelligence community. Also known under various other names, such as multiple classifier systems, committee of classifiers, or mixture of experts, ensemble based systems have shown to produce favorable results compared to those of single-expert systems for a broad range of applications and under a variety of scenarios. Design, implementation and application of such systems are the main topics of this article. Specifically, this paper reviews conditions under which ensemble based systems may be more beneficial than their single classifier counterparts, algorithms for generating individual components of the ensemble systems, and various procedures through which the individual classifiers can be combined. We discuss popular ensemble based algorithms, such as bagging, boosting, AdaBoost, stacked generalization, and hierarchical mixture of experts; as well as commonly used combination rules, including algebraic combination of outputs, voting based techniques, behavior knowledge space, and decision templates. Finally, we look at current and future research directions for novel applications of ensemble systems. Such applications include incremental learning, data fusion, feature selection, learning with missing features, confidence estimation, and error correcting output codes; all areas in which ensemble systems have shown great promise", "title": "" }, { "docid": "3be38e070678e358e23cb81432033062", "text": "W ireless integrated network sensors (WINS) provide distributed network and Internet access to sensors, controls, and processors deeply embedded in equipment, facilities, and the environment. The WINS network represents a new monitoring and control capability for applications in such industries as transportation, manufacturing, health care, environmental oversight, and safety and security. WINS combine microsensor technology and low-power signal processing, computation, and low-cost wireless networking in a compact system. Recent advances in integrated circuit technology have enabled construction of far more capable yet inexpensive sensors, radios, and processors, allowing mass production of sophisticated systems linking the physical world to digital data networks [2–5]. Scales range from local to global for applications in medicine, security, factory automation, environmental monitoring, and condition-based maintenance. Compact geometry and low cost allow WINS to be embedded and distributed at a fraction of the cost of conventional wireline sensor and actuator systems. WINS opportunities depend on development of a scalable, low-cost, sensor-network architecture. Such applications require delivery of sensor information to the user at a low bit rate through low-power transceivers. Continuous sensor signal processing enables the constant monitoring of events in an environment in which short message packets would suffice. 
Future applications of distributed embedded processors and sensors will require vast numbers of devices. Conventional methods of sensor networking represent an impractical demand on cable installation and network bandwidth. Processing at the source would drastically reduce the financial, computational, and management burden on communication system", "title": "" } ]
[ { "docid": "2fa6f761f22e0484a84f83e5772bef40", "text": "We consider the problem of planning smooth paths for a vehicle in a region bounded by polygonal chains. The paths are represented as B-spline functions. A path is found by solving an optimization problem using a cost function designed to care for both the smoothness of the path and the safety of the vehicle. Smoothness is defined as small magnitude of the derivative of curvature and safety is defined as the degree of centering of the path between the polygonal chains. The polygonal chains are preprocessed in order to remove excess parts and introduce safety margins for the vehicle. The method has been implemented for use with a standard solver and tests have been made on application data provided by the Swedish mining company LKAB.", "title": "" }, { "docid": "ba0dce539f33496dedac000b61efa971", "text": "The webpage aesthetics is one of the factors that affect the way people are attracted to a site. But two questions emerge: how can we improve a webpage's aesthetics and how can we evaluate this item? In order to solve this problem, we identified some of the theory that is underlying graphic design, gestalt theory and multimedia design. Based in the literature review, we proposed principles for web site design. We also propose a tool to evaluate web design.", "title": "" }, { "docid": "e726e11f855515017de77508b79d3308", "text": "OBJECTIVES\nThis study was conducted to better understand the characteristics of chronic pain patients seeking treatment with medicinal cannabis (MC).\n\n\nDESIGN\nRetrospective chart reviews of 139 patients (87 males, median age 47 years; 52 females, median age 48 years); all were legally qualified for MC use in Washington State.\n\n\nSETTING\nRegional pain clinic staffed by university faculty.\n\n\nPARTICIPANTS\n\n\n\nINCLUSION CRITERIA\nage 18 years and older; having legally accessed MC treatment, with valid documentation in their medical records. All data were de-identified.\n\n\nMAIN OUTCOME MEASURES\nRecords were scored for multiple indicators, including time since initial MC authorization, qualifying condition(s), McGill Pain score, functional status, use of other analgesic modalities, including opioids, and patterns of use over time.\n\n\nRESULTS\nOf 139 patients, 15 (11 percent) had prior authorizations for MC before seeking care in this clinic. The sample contained 236.4 patient-years of authorized MC use. Time of authorized use ranged from 11 days to 8.31 years (median of 1.12 years). Most patients were male (63 percent) yet female patients averaged 0.18 years longer authorized use. There were no other gender-specific trends or factors. Most patients (n = 123, 88 percent) had more than one pain syndrome present. Myofascial pain syndrome was the most common diagnosis (n = 114, 82 percent), followed by neuropathic pain (n = 89, 64 percent), discogenic back pain (n = 72, 51.7 percent), and osteoarthritis (n = 37, 26.6 percent). Other diagnoses included diabetic neuropathy, central pain syndrome, phantom pain, spinal cord injury, fibromyalgia, rheumatoid arthritis, HIV neuropathy, visceral pain, and malignant pain. 
In 51 (37 percent) patients, there were documented instances of major hurdles related to accessing MC, including prior physicians unwilling to authorize use, legal problems related to MC use, and difficulties in finding an affordable and consistent supply of MC.\n\n\nCONCLUSIONS\nData indicate that males and females access MC at approximately the same rate, with similar median authorization times. Although the majority of patient records documented significant symptom alleviation with MC, major treatment access and delivery barriers remain.", "title": "" }, { "docid": "b6dcf2064ad7f06fd1672b1348d92737", "text": "In this paper, we propose a two-step method to recognize multiple-food images by detecting candidate regions with several methods and classifying them with various kinds of features. In the first step, we detect several candidate regions by fusing outputs of several region detectors including Felzenszwalb's deformable part model (DPM) [1], a circle detector and the JSEG region segmentation. In the second step, we apply a feature-fusion-based food recognition method for bounding boxes of the candidate regions with various kinds of visual features including bag-of-features of SIFT and CSIFT with spatial pyramid (SP-BoF), histogram of oriented gradient (HoG), and Gabor texture features. In the experiments, we estimated ten food candidates for multiple-food images in the descending order of the confidence scores. As results, we have achieved the 55.8% classification rate, which improved the baseline result in case of using only DPM by 14.3 points, for a multiple-food image data set. This demonstrates that the proposed two-step method is effective for recognition of multiple-food images.", "title": "" }, { "docid": "d47143c38598cf88eeb8be654f8a7a00", "text": "Long Short-Term Memory (LSTM) networks have yielded excellent results on handwriting recognition. This paper describes an application of bidirectional LSTM networks to the problem of machine-printed Latin and Fraktur recognition. Latin and Fraktur recognition differs significantly from handwriting recognition in both the statistical properties of the data, as well as in the required, much higher levels of accuracy. Applications of LSTM networks to handwriting recognition use two-dimensional recurrent networks, since the exact position and baseline of handwritten characters is variable. In contrast, for printed OCR, we used a one-dimensional recurrent network combined with a novel algorithm for baseline and x-height normalization. A number of databases were used for training and testing, including the UW3 database, artificially generated and degraded Fraktur text and scanned pages from a book digitization project. The LSTM architecture achieved 0.6% character-level test-set error on English text. When the artificially degraded Fraktur data set is divided into training and test sets, the system achieves an error rate of 1.64%. On specific books printed in Fraktur (not part of the training set), the system achieves error rates of 0.15% (Fontane) and 1.47% (Ersch-Gruber). These recognition accuracies were found without using any language modelling or any other post-processing techniques.", "title": "" }, { "docid": "0b0273a1e2aeb98eb4115113c8957fd2", "text": "This paper deals with the approach of integrating a bidirectional boost-converter into the drivetrain of a (hybrid) electric vehicle in order to exploit the full potential of the electric drives and the battery. 
Currently, the automotive norms and standards are defined based on the characteristics of the voltage source. The current technologies of batteries for automotive applications have voltage which depends on the load and the state-of charge. The aim of this paper is to provide better system performance by stabilizing the voltage without the need of redesigning any of the current components in the system. To show the added-value of the proposed electrical topology, loss estimation is developed and proved based on actual components measurements and design. The component and its modelling is then implemented in a global system simulation environment of the electric architecture to show how it contributes enhancing the performance of the system.", "title": "" }, { "docid": "affa4a43b68f8c158090df3a368fe6b6", "text": "The purpose of this study is to evaluate the impact of modulated light projections perceived through the eyes on the autonomic nervous system (ANS). Three types of light projections, each containing both specific colors and specific modulations in the brainwaves frequency range, were tested, in addition to a placebo projection consisting of non-modulated white light. Evaluation was done using a combination of physiological measures (HR, HRV, SC) and psychological tests (Amen, POMS). Significant differences were found in the ANS effects of each of the colored light projections, and also between the colored and white projections.", "title": "" }, { "docid": "49f96e96623502ffe6053cab43054edf", "text": "Background YouTube, the online video creation and sharing site, supports both video content viewing and content creation activities. For a minority of people, the time spent engaging with YouTube can be excessive and potentially problematic. Method This study analyzed the relationship between content viewing, content creation, and YouTube addiction in a survey of 410 Indian-student YouTube users. It also examined the influence of content, social, technology, and process gratifications on user inclination toward YouTube content viewing and content creation. Results The results demonstrated that content creation in YouTube had a closer relationship with YouTube addiction than content viewing. Furthermore, social gratification was found to have a significant influence on both types of YouTube activities, whereas technology gratification did not significantly influence them. Among all perceived gratifications, content gratification had the highest relationship coefficient value with YouTube content creation inclination. The model fit and variance extracted by the endogenous constructs were good, which further validated the results of the analysis. Conclusion The study facilitates new ways to explore user gratification in using YouTube and how the channel responds to it.", "title": "" }, { "docid": "21ad29105c4b6772b05156afd33ac145", "text": "High resolution Digital Surface Models (DSMs) produced from airborne laser-scanning or stereo satellite images provide a very useful source of information for automated 3D building reconstruction. In this paper an investigation is reported about extraction of 3D building models from high resolution DSMs and orthorectified images produced from Worldview-2 stereo satellite imagery. The focus is on the generation of 3D models of parametric building roofs, which is the basis for creating Level Of Detail 2 (LOD2) according to the CityGML standard. 
In particular the building blocks containing several connected buildings with tilted roofs are investigated and the potentials and limitations of the modeling approach are discussed. The edge information extracted from the orthorectified image has been employed as an additional source of information in the 3D reconstruction algorithm. A model driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines for parametric building reconstruction. The 3D model is derived for each building part, and finally, a complete parametric model is formed by merging the 3D models of the individual building parts and adjusting the nodes after the merging step. For the remaining building parts that do not contain ridge lines, a prismatic model using polygon approximation of the corresponding boundary pixels is derived and merged to the parametric models to shape the final model of the building. A qualitative and quantitative assessment of the proposed method for the automatic reconstruction of buildings with parametric roofs is then provided by comparing the final model with the existing surface model as well as some field measurements.", "title": "" }, { "docid": "c89ce1ded524ff65c1ebd3d20be155bc", "text": "Actuarial risk assessment tools are used extensively to predict future violence, but previous studies comparing their predictive accuracies have produced inconsistent findings as a result of various methodological issues. We conducted meta-analyses of the effect sizes of 9 commonly used risk assessment tools and their subscales to compare their predictive efficacies for violence. The effect sizes were extracted from 28 original reports published between 1999 and 2008, which assessed the predictive accuracy of more than one tool. We used a within-subject design to improve statistical power and multilevel regression models to disentangle random effects of variation between studies and tools and to adjust for study features. All 9 tools and their subscales predicted violence at about the same moderate level of predictive efficacy with the exception of Psychopathy Checklist--Revised (PCL-R) Factor 1, which predicted violence only at chance level among men. Approximately 25% of the total variance was due to differences between tools, whereas approximately 85% of heterogeneity between studies was explained by methodological features (age, length of follow-up, different types of violent outcome, sex, and sex-related interactions). Sex-differentiated efficacy was found for a small number of the tools. If the intention is only to predict future violence, then the 9 tools are essentially interchangeable; the selection of which tool to use in practice should depend on what other functions the tool can perform rather than on its efficacy in predicting violence. The moderate level of predictive accuracy of these tools suggests that they should not be used solely for some criminal justice decision making that requires a very high level of accuracy such as preventive detention.", "title": "" }, { "docid": "16741aac03ea1a864ddab65c8c73eb7c", "text": "This report describes a preliminary evaluation of performance of a cell-FPGA-like architecture for future hybrid \"CMOL\" circuits.
Such circuits will combine a semiconductor-transistor (CMOS) stack and a two-level nanowire crossbar with molecular-scale two-terminal nanodevices (programmable diodes) formed at each crosspoint. Our cell-based architecture is based on a uniform CMOL fabric of \"tiles\". Each tile consists of 12 four-transistor basic cells and one (four times larger) latch cell. Due to high density of nanodevices, which may be used for both logic and routing functions, CMOL FPGA may be reconfigured around defective nanodevices to provide high defect tolerance. Using a semi-custom set of design automation tools we have evaluated CMOL FPGA performance for the Toronto 20 benchmark set, so far without optimization of several parameters including the power supply voltage and nanowire pitch. The results show that even without such optimization, CMOL FPGA circuits may provide a density advantage of more than two orders of magnitude over the traditional CMOS FPGA with the same CMOS design rules, at comparable time delay, acceptable power consumption and potentially high defect tolerance.", "title": "" }, { "docid": "cffce89fbb97dc1d2eb31a060a335d3c", "text": "This doctoral thesis deals with a number of challenges related to investigating and devising solutions to the Sentiment Analysis Problem, a subset of the discipline known as Natural Language Processing (NLP), following a path that differs from the most common approaches currently in-use. The majority of the research and applications building in Sentiment Analysis (SA) / Opinion Mining (OM) have been conducted and developed using Supervised Machine Learning techniques. It is our intention to prove that a hybrid approach merging fuzzy sets, a solid sentiment lexicon, traditional NLP techniques and aggregation methods will have the effect of compounding the power of all the positive aspects of these tools. In this thesis we will prove three main aspects, namely: 1. That a Hybrid Classification Model based on the techniques mentioned in the previous paragraphs will be capable of: (a) performing same or better than established Supervised Machine Learning techniques - namely, Naïve Bayes and Maximum Entropy (ME) - when the latter are utilised respectively as the only classification methods being applied, when calculating subjectivity polarity, and (b) computing the intensity of the polarity previously estimated. 2. That cross-ratio uninorms can be used to effectively fuse the classification outputs of several algorithms producing a compensatory effect. 3. That the Induced Ordered Weighted Averaging (IOWA) operator is a very good choice to model the opinion of the majority (consensus) when the outputs of a number of classification methods are combined together. For academic and experimental purposes we have built the proposed methods and associated prototypes in an iterative fashion: • Step 1: we start with the so-called Hybrid Standard Classification (HSC) method, responsible for subjectivity polarity determination. • Step 2: then, we have continued with the Hybrid Advanced Classification (HAC) method that computes the polarity intensity of opinions/sentiments. • Step 3: in closing, we present two methods that produce a semantic-specific aggregation of two or more classification methods, as a complement to the HSC/HAC methods when the latter cannot generate a classification value or when we are looking for an aggregation that implies consensus, respectively: ◦ the Hybrid Advanced Classification with Aggregation by Cross-ratio Uninorm (HACACU) method.
◦ the Hybrid Advanced Classification with Aggregation by Consensus (HACACO) method.", "title": "" }, { "docid": "8c853251e0fb408c829e6f99a581d4cf", "text": "We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.", "title": "" }, { "docid": "fb89a5aa87f1458177d6a32ef25fdf3b", "text": "The increase in population, the rapid economic growth and the rise in community living standards accelerate municipal solid waste (MSW) generation in developing cities. This problem is especially serious in Pudong New Area, Shanghai, China. The daily amount of MSW generated in Pudong was about 1.11 kg per person in 2006. According to the current population growth trend, the solid waste quantity generated will continue to increase with the city's development. In this paper, we describe a waste generation and composition analysis and provide a comprehensive review of municipal solid waste management (MSWM) in Pudong. Some of the important aspects of waste management, such as the current status of waste collection, transport and disposal in Pudong, will be illustrated. Also, the current situation will be evaluated, and its problems will be identified.", "title": "" }, { "docid": "bcd16100ca6814503e876f9f15b8c7fb", "text": "OBJECTIVE\nBrain-computer interfaces (BCIs) are devices that enable severely disabled people to communicate and interact with their environments using their brain waves. Most studies investigating BCI in humans have used scalp EEG as the source of electrical signals and focused on motor control of prostheses or computer cursors on a screen. The authors hypothesize that the use of brain signals obtained directly from the cortical surface will more effectively control a communication/spelling task compared to scalp EEG.\n\n\nMETHODS\nA total of 6 patients with medically intractable epilepsy were tested for the ability to control a visual keyboard using electrocorticographic (ECOG) signals. ECOG data collected during a P300 visual task paradigm were preprocessed and used to train a linear classifier to subsequently predict the intended target letters.\n\n\nRESULTS\nThe classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in 5 of the 6 people tested. ECOG data from electrodes outside the language cortex contributed to the classifier and enabled participants to write words on a visual keyboard.\n\n\nCONCLUSIONS\nThis is a novel finding because previous invasive BCI research in humans used signals exclusively from the motor cortex to control a computer cursor or prosthetic device. 
These results demonstrate that ECOG signals from electrodes both overlying and outside the language cortex can reliably control a visual keyboard to generate language output without voice or limb movements.", "title": "" }, { "docid": "8e324cf4900431593d9ebc73e7809b23", "text": "Even though there is a plethora of studies investigating the challenges of adopting ebanking services, a search through the literature indicates that prior studies have investigated either user adoption challenges or the bank implementation challenges. This study integrated both perspectives to provide a broader conceptual framework for investigating challenges banks face in marketing e-banking services in developing country such as Ghana. The results from the mixed method study indicated that institutional–based challenges as well as userbased challenges affect the marketing of e-banking products in Ghana. The strategic implications of the findings for marketing ebanking services are discussed to guide managers to implement e-banking services in Ghana.", "title": "" }, { "docid": "62166980f94bba5e75c9c6ad4a4348f1", "text": "In this paper the design and the implementation of a linear, non-uniform antenna array for a 77-GHz MIMO FMCW system that allows for the estimation of both the distance and the angular position of a target are presented. The goal is to achieve a good trade-off between the main beam width and the side lobe level. The non-uniform spacing in addition with the MIMO principle offers a superior performance compared to a classical uniform half-wavelength antenna array with an equal number of elements. However the design becomes more complicated and can not be tackled using analytical methods. Starting with elementary array factor considerations the design is approached using brute force, stepwise brute force, and particle swarm optimization. The particle swarm optimized array was also implemented. Simulation results and measurements are presented and discussed.", "title": "" }, { "docid": "eba25ae59603328f3ef84c0994d46472", "text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.", "title": "" }, { "docid": "13974867d98411b6a999374afcc5b2cb", "text": "Current best local descriptors are learned on a large dataset of matching and non-matching keypoint pairs. However, data of this kind is not always available since detailed keypoint correspondences can be hard to establish. On the other hand, we can often obtain labels for pairs of keypoint bags. 
For example, keypoint bags extracted from two images of the same object under different views form a matching pair, and keypoint bags extracted from images of different objects form a non-matching pair. On average, matching pairs should contain more corresponding keypoints than non-matching pairs. We describe an end-to-end differentiable architecture that enables the learning of local keypoint descriptors from such weakly-labeled data.", "title": "" }, { "docid": "bc7f80192416aa7787657aed1bda3997", "text": "In this paper we propose a deep learning technique to improve the performance of semantic segmentation tasks. Previously proposed algorithms generally suffer from the over-dependence on a single modality as well as a lack of training data. We made three contributions to improve the performance. Firstly, we adopt two models which are complementary in our framework to enrich field-of-views and features to make segmentation more reliable. Secondly, we repurpose the datasets from other tasks to the segmentation task by training the two models in our framework on different datasets. This brings the benefits of data augmentation while saving the cost of image annotation. Thirdly, the number of parameters in our framework is minimized to reduce the complexity of the framework and to avoid over-fitting. Experimental results show that our framework significantly outperforms the current state-of-the-art methods with a smaller number of parameters and better generalization ability.", "title": "" } ]
scidocsrr
8f37b402bb1ac9b58883707aee4a2b5c
RELIABILITY-BASED MANAGEMENT OF BURIED PIPELINES
[ { "docid": "150e7a6f46e93fc917e43e32dedd9424", "text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.", "title": "" } ]
[ { "docid": "8abd03202f496de4bec6270946d53a9c", "text": "In this paper, we use time-series modeling to forecast taxi travel demand, in the context of a mobile application-based taxi hailing service. In particular, we model the passenger demand density at various locations in the city of Bengaluru, India. Using the data, we first shortlist time-series models that suit our application. We then analyse the performance of these models by using Mean Absolute Percentage Error (MAPE) as the performance metric. In order to improve the model performance, we employ a multi-level clustering technique where we aggregate demand over neighboring cells/geohashes. We observe that the improved model based on clustering leads to a forecast accuracy of 80% per km2. In addition, our technique obtains an accuracy of 89% per km2 for the most frequently occurring use case.", "title": "" }, { "docid": "80e9f9261397cb378920a6c897fd352a", "text": "Purpose: This study develops a comprehensive research model that can explain potential customers’ behavioral intentions to adopt and use smart home services. Methodology: This study proposes and validates a new theoretical model that extends the theory of planned behavior (TPB). Partial least squares analysis (PLS) is employed to test the research model and corresponding hypotheses on data collected from 216 survey samples. Findings: Mobility, security/privacy risk, and trust in the service provider are important factors affecting the adoption of smart home services. Practical implications: To increase potential users’ adoption rate, service providers should focus on developing mobility-related services that enable people to access smart home services while on the move using mobile devices via control and monitoring functions. Originality/Value: This study is the first empirical attempt to examine user acceptance of smart home services, as most of the prior literature has concerned technical features.", "title": "" }, { "docid": "7bd440a6c7aece364877dbb5170cfcfb", "text": "Semantic representation lies at the core of several applications in Natural Language Processing. However, most existing semantic representation techniques cannot be used effectively for the representation of individual word senses. We put forward a novel multilingual concept representation, called MUFFIN, which not only enables accurate representation of word senses in different languages, but also provides multiple advantages over existing approaches. MUFFIN represents a given concept in a unified semantic space irrespective of the language of interest, enabling cross-lingual comparison of different concepts. We evaluate our approach in two different evaluation benchmarks, semantic similarity and Word Sense Disambiguation, reporting state-of-the-art performance on several standard datasets.", "title": "" }, { "docid": "29e56287071ca1fc1bf3d83f67b3ce8d", "text": "In this paper, we seek to identify factors that might increase the likelihood of adoption and continued use of cyberinfrastructure by scientists. To do so, we review the main research on Information and Communications Technology (ICT) adoption and use by addressing research problems, theories and models used, findings, and limitations. We focus particularly on the individual user perspective. We categorize previous studies into two groups: Adoption research and post-adoption (continued use) research. In addition, we review studies specifically regarding cyberinfrastructure adoption and use by scientists and other special user groups. 
We identify the limitations of previous theories, models and research findings appearing in the literature related to our current interest in scientists’ adoption and continued use of cyber-infrastructure. We synthesize the previous theories and models used for ICT adoption and use, and then we develop a theoretical framework for studying scientists’ adoption and use of cyber-infrastructure. We also proposed a research design based on the research model developed. Implications for researchers and practitioners are provided.", "title": "" }, { "docid": "da9ffb00398f6aad726c247e3d1f2450", "text": "We propose noWorkflow, a tool that transparently captures provenance of scripts and enables reproducibility. Unlike existing approaches, noWorkflow is non-intrusive and does not require users to change the way they work – users need not wrap their experiments in scientific workflow systems, install version control systems, or instrument their scripts. The tool leverages Software Engineering techniques, such as abstract syntax tree analysis, reflection, and profiling, to collect different types of provenance, including detailed information about the underlying libraries. We describe how noWorkflow captures multiple kinds of provenance and the different classes of analyses it supports: graph-based visualization; differencing over provenance trails; and inference queries.", "title": "" }, { "docid": "59e02bc986876edc0ee0a97fd4d12a28", "text": "CONTEXT\nSocial anxiety disorder is thought to involve emotional hyperreactivity, cognitive distortions, and ineffective emotion regulation. While the neural bases of emotional reactivity to social stimuli have been described, the neural bases of emotional reactivity and cognitive regulation during social and physical threat, and their relationship to social anxiety symptom severity, have yet to be investigated.\n\n\nOBJECTIVE\nTo investigate behavioral and neural correlates of emotional reactivity and cognitive regulation in patients and controls during processing of social and physical threat stimuli.\n\n\nDESIGN\nParticipants were trained to implement cognitive-linguistic regulation of emotional reactivity induced by social (harsh facial expressions) and physical (violent scenes) threat while undergoing functional magnetic resonance imaging and providing behavioral ratings of negative emotion experience.\n\n\nSETTING\nAcademic psychology department.\n\n\nPARTICIPANTS\nFifteen adults with social anxiety disorder and 17 demographically matched healthy controls.\n\n\nMAIN OUTCOME MEASURES\nBlood oxygen level-dependent signal and negative emotion ratings.\n\n\nRESULTS\nBehaviorally, patients reported greater negative emotion than controls during social and physical threat but showed equivalent reduction in negative emotion following cognitive regulation. Neurally, viewing social threat resulted in greater emotion-related neural responses in patients than controls, with social anxiety symptom severity related to activity in a network of emotion- and attention-processing regions in patients only. Viewing physical threat produced no between-group differences. Regulation during social threat resulted in greater cognitive and attention regulation-related brain activation in controls compared with patients. 
Regulation during physical threat produced greater cognitive control-related response (ie, right dorsolateral prefrontal cortex) in patients compared with controls.\n\n\nCONCLUSIONS\nCompared with controls, patients demonstrated exaggerated negative emotion reactivity and reduced cognitive regulation-related neural activation, specifically for social threat stimuli. These findings help to elucidate potential neural mechanisms of emotion regulation that might serve as biomarkers for interventions for social anxiety disorder.", "title": "" }, { "docid": "b13c9597f8de229fb7fec3e23c0694d1", "text": "Using capture-recapture analysis we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. Each laboratory has its own population pool which overlaps, often extensively, with the hundreds of other laboratories using MTurk. Our estimate is based on a sample of 114,460 completed sessions from 33,408 unique participants and 689 sessions across seven laboratories in the US, Europe, and Australia from January 2012 to March 2015.", "title": "" }, { "docid": "dc33d2edcfb124af607bcb817589f6e9", "text": "In this letter, a novel coaxial line to substrate integrated waveguide (SIW) broadband transition is presented. The transition is designed by connecting the inner conductor of a coaxial line to an open-circuited SIW. The configuration directly transforms the TEM mode of a coaxial line to the fundamental TE10 mode of the SIW. A prototype back-to-back transition is fabricated for X-band operation using a 0.508 mm thick RO 4003C substrate with dielectric constant 3.55. Comparison with other reported transitions shows that the present structure provides lower passband insertion loss, wider bandwidth and most compact. The area of each transition is 0.08λg2 where λg is the guided wavelength at passband center frequency of f0 = 10.5 GHz. Measured 15 dB and 20 dB matching bandwidths are over 48% and 20%, respectively, at f0.", "title": "" }, { "docid": "a4e6b629ec4b0fdf8784ba5be1a62260", "text": "Today's real-world databases typically contain millions of items with many thousands of fields. As a result, traditional distribution-based outlier detection techniques have more and more restricted capabilities and novel k-nearest neighbors based approaches have become more and more popular. However, the problems with these k-nearest neighbors rankings for top n outliers, are very computationally expensive for large datasets, and doubts exist in general whether they would work well for high dimensional datasets. To partially circumvent these problems, we propose in this paper a new global outlier factor and a new local outlier factor and an efficient outlier detection algorithm developed upon them that is easy to implement and can provide competing performances with existing solutions. Experiments performed on both synthetic and real data sets demonstrate the efficacy of our method. & 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "494d720d5a8c7c58b795c5c6131fa8d1", "text": "The increasing emergence of pervasive information systems requires a clearer understanding of the underlying characteristics in relation to user acceptance. Based on the integration of UTAUT2 and three pervasiveness constructs, we derived a comprehensive research model to account for pervasive information systems. 
Data collected from 346 participants in an online survey was analyzed to test the developed model using structural equation modeling and taking into account multigroup analysis. The results confirm the applicability of the integrated UTAUT2 model to measure pervasiveness. Implications for research and practice are discussed together with future research opportunities.", "title": "" }, { "docid": "d94d49cde6878e0841c1654090062559", "text": "In previous work we described a method for compactly representing graphs with small separators, which makes use of small separators, and presented preliminary experimental results. In this paper we extend the experimental results in several ways, including extensions for dynamic insertion and deletion of edges, a comparison of a variety of coding schemes, and an implementation of two applications using the representation. The results show that the representation is quite effective for a wide variety of real-world graphs, including graphs from finite-element meshes, circuits, street maps, router connectivity, and web links. In addition to significantly reducing the memory requirements, our implementation of the representation is faster than standard representations for queries. The byte codes we introduce lead to DFT times that are a factor of 2.5 faster than our previous results with gamma codes and a factor of between 1 and 9 faster than adjacency lists, while using a factor of between 3 and 6 less space.", "title": "" }, { "docid": "0e45e57b4e799ebf7e8b55feded7e9e1", "text": "IMPORTANCE\nIt is increasingly evident that Parkinson disease (PD) is not a single entity but rather a heterogeneous neurodegenerative disorder.\n\n\nOBJECTIVE\nTo evaluate available evidence, based on findings from clinical, imaging, genetic and pathologic studies, supporting the differentiation of PD into subtypes.\n\n\nEVIDENCE REVIEW\nWe performed a systematic review of articles cited in PubMed between 1980 and 2013 using the following search terms: Parkinson disease, parkinsonism, tremor, postural instability and gait difficulty, and Parkinson disease subtypes. The final reference list was generated on the basis of originality and relevance to the broad scope of this review.\n\n\nFINDINGS\nSeveral subtypes, such as tremor-dominant PD and postural instability gait difficulty form of PD, have been found to cluster together. Other subtypes also have been identified, but validation by subtype-specific biomarkers is still lacking.\n\n\nCONCLUSIONS AND RELEVANCE\nSeveral PD subtypes have been identified, but the pathogenic mechanisms underlying the observed clinicopathologic heterogeneity in PD are still not well understood. Further research into subtype-specific diagnostic and prognostic biomarkers may provide insights into mechanisms of neurodegeneration and improve epidemiologic and therapeutic clinical trial designs.", "title": "" }, { "docid": "0218c583a8658a960085ddf813f38dbf", "text": "The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. 
Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.", "title": "" }, { "docid": "1b5fc0a7b39bedcac9bdc52584fb8a22", "text": "Neem (Azadirachta indica) is a medicinal plant of containing diverse chemical active substances of several biological properties. So, the aim of the current investigation was to assess the effects of water leaf extract of neem plant on the survival and healthy status of Nile tilapia (Oreochromis niloticus), African cat fish (Clarias gariepinus) and zooplankton community. The laboratory determinations of lethal concentrations (LC 100 and LC50) through a static bioassay test were performed. The 24 h LC100 of neem leaf extract was estimated as 4 and 11 g/l, for juvenile's O. niloticus and C. gariepinus, respectively, while, the 96-h LC50 was 1.8 and 4 g/l, respectively. On the other hand, the 24 h LC100 for cladocera and copepoda were 0.25 and 0.45 g/l, respectively, while, the 96-h LC50 was 0.1 and 0.2 g/l, respectively. At the highest test concentrations, adverse effects were obvious with significant reductions in several cladoceran and copepod species. Some alterations in glucose levels, total protein, albumin, globulin as well as AST and ALT in plasma of treated O. niloticus and C. gariepinus with /2 and /10 LC50 of neem leaf water extract compared with non-treated one after 2 and 7 days of exposure were recorded and discussed. It could be concluded that the application of neem leaf extract can be used to control unwanted organisms in ponds as environment friendly material instead of deleterious pesticides. Also, extensive investigations should be established for the suitable methods of application in aquatic animal production facilities to be fully explored in future.", "title": "" }, { "docid": "cd4e2e3af17cd84d4ede35807e71e783", "text": "A proposal for saliency computation within the visual cortex is put forth based on the premise that localized saliency computation serves to maximize information sampled from one's environment. 
The model is built entirely on computational constraints but nevertheless results in an architecture with cells and connectivity reminiscent of that appearing in the visual cortex. It is demonstrated that a variety of visual search behaviors appear as emergent properties of the model and therefore basic principles of coding and information transmission. Experimental results demonstrate greater efficacy in predicting fixation patterns across two different data sets as compared with competing models.", "title": "" }, { "docid": "f73cd33c8dfc9791558b239aede6235b", "text": "Web clustering engines organize search results by topic, thus offering a complementary view to the flat-ranked list returned by conventional search engines. In this survey, we discuss the issues that must be addressed in the development of a Web clustering engine, including acquisition and preprocessing of search results, their clustering and visualization. Search results clustering, the core of the system, has specific requirements that cannot be addressed by classical clustering algorithms. We emphasize the role played by the quality of the cluster labels as opposed to optimizing only the clustering structure. We highlight the main characteristics of a number of existing Web clustering engines and also discuss how to evaluate their retrieval performance. Some directions for future research are finally presented.", "title": "" }, { "docid": "4dba2a9a29f58b55a6b2c3101acf2437", "text": "Clinical and neurobiological findings have reported the involvement of endocannabinoid signaling in the pathophysiology of schizophrenia. This system modulates dopaminergic and glutamatergic neurotransmission that is associated with positive, negative, and cognitive symptoms of schizophrenia. Despite neurotransmitter impairments, increasing evidence points to a role of glial cells in schizophrenia pathobiology. Glial cells encompass three main groups: oligodendrocytes, microglia, and astrocytes. These cells promote several neurobiological functions, such as myelination of axons, metabolic and structural support, and immune response in the central nervous system. Impairments in glial cells lead to disruptions in communication and in the homeostasis of neurons that play role in pathobiology of disorders such as schizophrenia. Therefore, data suggest that glial cells may be a potential pharmacological tool to treat schizophrenia and other brain disorders. In this regard, glial cells express cannabinoid receptors and synthesize endocannabinoids, and cannabinoid drugs affect some functions of these cells that can be implicated in schizophrenia pathobiology. Thus, the aim of this review is to provide data about the glial changes observed in schizophrenia, and how cannabinoids could modulate these alterations.", "title": "" }, { "docid": "e2807120a8a04a9c5f5f221e413aec4d", "text": "Background A military aircraft in a hostile environment may need to use radar jamming in order to avoid being detected or engaged by the enemy. Effective jamming can require knowledge of the number and type of enemy radars; however, the radar receiver on the aircraft will observe a single stream of pulses from all radar emitters combined. It is advantageous to separate this collection of pulses into individual streams each corresponding to a particular emitter in the environment; this process is known as pulse deinterleaving. 
Pulse deinterleaving is critical for effective electronic warfare (EW) signal processing such as electronic attack (EA) and electronic protection (EP) because it not only aids in the identification of enemy radars but also permits the intelligent allocation of processing resources.", "title": "" }, { "docid": "6a470404c36867a18a98fafa9df6848f", "text": "Memory links use variable-impedance drivers, feed-forward equalization (FFE) [1], on-die termination (ODT) and slew-rate control to optimize the signal integrity (SI). An asymmetric DRAM link configuration exploits the availability of a fast CMOS technology on the memory controller side to implement powerful equalization, while keeping the circuit complexity on the DRAM side relatively simple. This paper proposes the use of Tomlinson Harashima precoding (THP) [2-4] in a memory controller as replacement of the afore-mentioned SI optimization techniques. THP is a transmitter equalization technique in which post-cursor inter-symbol interference (ISI) is cancelled by means of an infinite impulse response (IIR) filter with modulo-based amplitude limitation; similar to a decision feedback equalizer (DFE) on the receive side. However, in contrast to a DFE, THP does not suffer from error propagation.", "title": "" }, { "docid": "570e48e839bd2250473d4332adf2b53f", "text": "Autologous stem cell transplant can be a curative therapy to restore normal hematopoiesis after myeloablative treatments in patients with malignancies. Aim: To evaluate the effect of rehabilitation program for caregivers about patients’ post autologous bone marrow transplantation Research Design: A quasi-experimental design was used. Setting: The study was conducted in Sheikh Zayed Specialized Hospital at Oncology Outpatient Clinic of Bone Marrow Transplantation Unit. Sample: A purposive sample comprised; a total number of 60 patients, their age ranged from 21 to 50 years, free from any other chronic disease and the caregivers are living with the patients in the same home. Tools: Two tools were used for data collection. First tool: An interviewing autologous bone marrow transplantation questionnaire for the patients and their caregivers was divided into five parts; Including: Socio-demographic data, knowledge of caregivers regarding autologous bone marrow transplant and side effect of chemotherapy, family caregivers’ practices according to their providing care related to post bone marrow transplantation, signs and symptoms, activities of daily living for patients and home environmental sanitation for the patients. Second tool: deals with physical examination assessment of the patients from head to toe. Results: 61.7% of patients aged 30˂40 years, and 68.3 % were female. Regarding the type of relationship with the patients, 48.3% were the mother, 58.3% of patients who underwent autologous bone marrow transplantation had a sanitary environment and there were highly statistically significant differences between caregivers’ knowledge and practices pre/post program. Conclusion: There were highly statistically significant differences between family caregivers' total knowledge, their practices, as well as their total caregivers’ knowledge, practices and patients’ independency level pre/post rehabilitation program. . 
Recommendations: Counseling for family caregivers of patients who underwent autologous bone marrow transplantation and carrying out a rehabilitation program for the patients and their caregivers to be performed properly during the rehabilitation period at cancer hospitals such as 57357 Hospital and The National Cancer Institute in Cairo.", "title": "" } ]
scidocsrr
38a5087591e786f4da8b636d631b6e8b
Evaluation of intrabony defects treated with platelet-rich fibrin or autogenous bone graft: A comparative analysis
[ { "docid": "e1adfaf4af1e4fb5d0101a157039ccfe", "text": "Platelet-rich fibrin (PRF) belongs to a new generation of platelet concentrates, with simplified processing and without biochemical blood handling. In this second article, we investigate the platelet-associated features of this biomaterial. During PRF processing by centrifugation, platelets are activated and their massive degranulation implies a very significant cytokine release. Concentrated platelet-rich plasma platelet cytokines have already been quantified in many technologic configurations. To carry out a comparative study, we therefore undertook to quantify PDGF-BB, TGFbeta-1, and IGF-I within PPP (platelet-poor plasma) supernatant and PRF clot exudate serum. These initial analyses revealed that slow fibrin polymerization during PRF processing leads to the intrinsic incorporation of platelet cytokines and glycanic chains in the fibrin meshes. This result would imply that PRF, unlike the other platelet concentrates, would be able to progressively release cytokines during fibrin matrix remodeling; such a mechanism might explain the clinically observed healing properties of PRF.", "title": "" } ]
[ { "docid": "7f553d57ec54b210e86e4d7abba160d7", "text": "SUMMARY\nBioIE is a rule-based system that extracts informative sentences relating to protein families, their structures, functions and diseases from the biomedical literaturE. Based on manual definition of templates and rules, it aims at precise sentence extraction rather than wide recall. After uploading source text or retrieving abstracts from MEDLINE, users can extract sentences based on predefined or user-defined template categories. BioIE also provides a brief insight into the syntactic and semantic context of the source-text by looking at word, N-gram and MeSH-term distributions. Important Applications of BioIE are in, for example, annotation of microarray data and of protein databases.\n\n\nAVAILABILITY\nhttp://umber.sbs.man.ac.uk/dbbrowser/bioie/", "title": "" }, { "docid": "381e7083535bb5f15cdece7df4e986e3", "text": "We present a hybrid deep learning method for modelling the uncertainty of camera relocalization from a single RGB image. The proposed system leverages the discriminative deep image representation from a convolutional neural networks, and uses Gaussian Process regressors to generate the probability distribution of the six degree of freedom (6DoF) camera pose in an end-to-end fashion. This results in a network that can generate uncertainties over its inferences with no need to sample many times. Furthermore we show that our objective based on KL divergence reduces the dependence on the choice of hyperparameters. The results show that compared to the state-of-the-art Bayesian camera relocalization method, our model produces comparable localization uncertainty and improves the system efficiency significantly, without loss of accuracy.", "title": "" }, { "docid": "d1eb2bf9d265017450a8a891540afa30", "text": "Air-gapped networks are isolated, separated both logically and physically from public networks. Although the feasibility of invading such systems has been demonstrated in recent years, exfiltration of data from air-gapped networks is still a challenging task. In this paper we present GSMem, a malware that can exfiltrate data through an air-gap over cellular frequencies. Rogue software on an infected target computer modulates and transmits electromagnetic signals at cellular frequencies by invoking specific memory-related instructions and utilizing the multichannel memory architecture to amplify the transmission. Furthermore, we show that the transmitted signals can be received and demodulated by a rootkit placed in the baseband firmware of a nearby cellular phone. We present crucial design issues such as signal generation and reception, data modulation, and transmission detection. We implement a prototype of GSMem consisting of a transmitter and a receiver and evaluate its performance and limitations. Our current results demonstrate its efficacy and feasibility, achieving an effective transmission distance of 1 5.5 meters with a standard mobile phone. When using a dedicated, yet affordable hardware receiver, the effective distance reached over 30 meters.", "title": "" }, { "docid": "11ca0df1121fc8a8e0ebaec58ea08a87", "text": "In real video surveillance scenarios, visual pedestrian attributes, such as gender, backpack, clothes types, are very important for pedestrian retrieval and person reidentification. Existing methods for attributes recognition have two drawbacks: (a) handcrafted features (e.g. 
color histograms, local binary patterns) cannot cope well with the difficulty of real video surveillance scenarios; (b) the relationship among pedestrian attributes is ignored. To address the two drawbacks, we propose two deep learning based models to recognize pedestrian attributes. On the one hand, each attribute is treated as an independent component and the deep learning based single attribute recognition model (DeepSAR) is proposed to recognize each attribute one by one. On the other hand, to exploit the relationship among attributes, the deep learning framework which recognizes multiple attributes jointly (DeepMAR) is proposed. In the DeepMAR, one attribute can contribute to the representation of other attributes. For example, the gender of woman can contribute to the representation of long hair and wearing skirt. Experiments on recent popular pedestrian attribute datasets illustrate that our proposed models achieve the state-of-the-art results.", "title": "" }, { "docid": "f4a0738d814e540f7c208ab1e3666fb7", "text": "In this paper, we analyze a generic algorithm scheme for sequential global optimization using Gaussian processes. The upper bounds we derive on the cumulative regret for this generic algorithm improve by an exponential factor the previously known bounds for algorithms like GP-UCB. We also introduce the novel Gaussian Process Mutual Information algorithm (GP-MI), which significantly improves further these upper bounds for the cumulative regret. We confirm the efficiency of this algorithm on synthetic and real tasks against the natural competitor, GP-UCB, and also the Expected Improvement heuristic. Preprint for the 31st International Conference on Machine Learning (ICML 2014), arXiv:1311.4825v3 [stat.ML], 8 Jun 2015. Erratum: After the publication of our article, we found an error in the proof of Lemma 1 which invalidates the main theorem. It appears that the information given to the algorithm is not sufficient for the main theorem to hold true. The theoretical guarantees would remain valid in a setting where the algorithm observes the instantaneous regret instead of noisy samples of the unknown function. We describe in this page the mistake and its consequences. Let f : X → R be the unknown function to be optimized, which is a sample from a Gaussian process. Let's fix x, x1, . . . , xT ∈ X and the observations yt = f(xt) + εt, where the noise variables εt are independent Gaussian noise N(0, σ²). We define the instantaneous regret rt = f(x⋆) − f(xt) and the cumulative regret MT = ∑_{t=1}^{T} rt.", "title": "" }, { "docid": "71f8aca9d325f015836033c2a46adaa6", "text": "BACKGROUND\nTwenty states currently require that women seeking abortion be counseled on possible psychological responses, with six states stressing negative responses. The majority of research finds that women whose unwanted pregnancies end in abortion do not subsequently have adverse mental health outcomes; scant research examines this relationship for young women.\n\n\nMETHODS\nFour waves of data from the National Longitudinal Study of Adolescent Health were analyzed. Population-averaged lagged logistic and linear regression models were employed to test the relationship between pregnancy resolution outcome and subsequent depressive symptoms, adjusting for prior depressive symptoms, history of traumatic experiences, and sociodemographic covariates. Depressive symptoms were measured using a nine-item version of the Center for Epidemiologic Studies Depression scale. 
Analyses were conducted among two subsamples of women whose unwanted first pregnancies were resolved in either abortion or live birth: (1) 856 women with an unwanted first pregnancy between Waves 2 and 3; and (2) 438 women with an unwanted first pregnancy between Waves 3 and 4 (unweighted n's).\n\n\nRESULTS\nIn unadjusted and adjusted linear and logistic regression analyses for both subsamples, there was no association between having an abortion after an unwanted first pregnancy and subsequent depressive symptoms. In fully adjusted models, the most recent measure of prior depressive symptoms was consistently associated with subsequent depressive symptoms.\n\n\nCONCLUSIONS\nIn a nationally representative, longitudinal dataset, there was no evidence that young women who had abortions were at increased risk of subsequent depressive symptoms compared with those who give birth after an unwanted first pregnancy.", "title": "" }, { "docid": "06b0708250515510b8a3fc302045fe4b", "text": "While the subject of cyberbullying of children and adolescents has begun to be addressed, less attention and research have focused on cyberbullying in the workplace. Male-dominated workplaces such as manufacturing settings are found to have an increased risk of workplace bullying, but the prevalence of cyberbullying in this sector is not known. This exploratory study investigated the prevalence and methods of face-to-face bullying and cyberbullying of males at work. One hundred three surveys (a modified version of the revised Negative Acts Questionnaire [NAQ-R]) were returned from randomly selected members of the Australian Manufacturing Workers' Union (AMWU). The results showed that 34% of respondents were bullied face-to-face, and 10.7% were cyberbullied. All victims of cyberbullying also experienced face-to-face bullying. The implications for organizations' \"duty of care\" in regard to this new form of bullying are indicated.", "title": "" }, { "docid": "3ff01763def34800cf8afb9fc5fa9c83", "text": "The emerging machine learning technique called support vector machines is proposed as a method for performing nonlinear equalization in communication systems. The support vector machine has the advantage that a smaller number of parameters for the model can be identified in a manner that does not require the extent of prior information or heuristic assumptions that some previous techniques require. Furthermore, the optimization method of a support vector machine is quadratic programming, which is a well-studied and understood mathematical programming technique. Support vector machine simulations are carried out on nonlinear problems previously studied by other researchers using neural networks. This allows initial comparison against other techniques to determine the feasibility of using the proposed method for nonlinear detection. Results show that support vector machines perform as well as neural networks on the nonlinear problems investigated. A method is then proposed to introduce decision feedback processing to support vector machines to address the fact that intersymbol interference (ISI) data generates input vectors having temporal correlation, whereas a standard support vector machine assumes independent input vectors. Presenting the problem from the viewpoint of the pattern space illustrates the utility of a bank of support vector machines. 
This approach yields a nonlinear processing method that is somewhat different than the nonlinear decision feedback method whereby the linear feedback filter of the decision feedback equalizer is replaced by a Volterra filter. A simulation using a linear system shows that the proposed method performs equally to a conventional decision feedback equalizer for this problem.", "title": "" }, { "docid": "a3dc6a178b7861959b992387366c2c78", "text": "Linked data and semantic web technologies are gaining impact and importance in the Architecture, Engineering, Construction and Facility Management (AEC/FM) industry. Whereas we have seen a strong technological shift with the emergence of Building Information Modeling (BIM) tools, this second technological shift to the exchange and management of building data over the web might be even stronger than the first one. In order to make this a success, the AEC/FM industry will need strong and appropriate ontologies, as they will allow industry practitioners to structure their data in a commonly agreed format and exchange the data. Herein, we look at the ontologies that are emerging in the area of Building Automation and Control Systems (BACS). We propose a BACS ontology in strong alignment with existing ontologies and evaluate how it can be used for capturing automation and control systems of a building by modeling a use case.", "title": "" }, { "docid": "5e0ac4a3957f5eba26790f54678df7fc", "text": "Recent statistics show that in 2015 more than 140 millions new malware samples have been found. Among these, a large portion is due to ransomware, the class of malware whose specific goal is to render the victim’s system unusable, in particular by encrypting important files, and then ask the user to pay a ransom to revert the damage. Several ransomware include sophisticated packing techniques, and are hence difficult to statically analyse. We present EldeRan, a machine learning approach for dynamically analysing and classifying ransomware. EldeRan monitors a set of actions performed by applications in their first phases of installation checking for characteristics signs of ransomware. Our tests over a dataset of 582 ransomware belonging to 11 families, and with 942 goodware applications, show that EldeRan achieves an area under the ROC curve of 0.995. Furthermore, EldeRan works without requiring that an entire ransomware family is available beforehand. These results suggest that dynamic analysis can support ransomware detection, since ransomware samples exhibit a set of characteristic features at run-time that are common across families, and that helps the early detection of new variants. We also outline some limitations of dynamic analysis for ransomware and propose possible solutions.", "title": "" }, { "docid": "ad903f1d8998200d89234f0244452ad4", "text": "Within last two decades, social media has emerged as almost an alternate world where people communicate with each other and express opinions about almost anything. This makes platforms like Facebook, Reddit, Twitter, Myspace etc. a rich bank of heterogeneous data, primarily expressed via text but reflecting all textual and non-textual data that human interaction can produce. We propose a novel attention based hierarchical LSTM model to classify discourse act sequences in social media conversations, aimed at mining data from online discussion using textual meanings beyond sentence level. 
The very uniqueness of the task is the complete categorization of possible pragmatic roles in informal textual discussions, contrary to extraction of question-answers, stance detection or sarcasm identification which are very much role specific tasks. An early attempt was made on a Reddit discussion dataset. We train our model on the same data, and present test results on two different datasets, one from Reddit and one from Facebook. Our proposed model outperformed the previous one in terms of domain independence; without using platform-dependent structural features, our hierarchical LSTM with word relevance attention mechanism achieved F1-scores of 71% and 66% respectively to predict discourse roles of comments in Reddit and Facebook discussions. Efficiency of recurrent and convolutional architectures in order to learn discursive representation on the same task has been presented and analyzed, with different word and comment embedding schemes. Our attention mechanism enables us to inquire into relevance ordering of text segments according to their roles in discourse. We present a human annotator experiment to unveil important observations about modeling and data annotation. Equipped with our text-based discourse identification model, we inquire into how heterogeneous non-textual features like location, time, leaning of information etc. play their roles in characterizing online discussions on Facebook.", "title": "" }, { "docid": "33468c214408d645651871bd8018ed82", "text": "In this paper, we carry out two experiments on the TIMIT speech corpus with bidirectional and unidirectional Long Short Term Memory (LSTM) networks. In the first experiment (framewise phoneme classification) we find that bidirectional LSTM outperforms both unidirectional LSTM and conventional Recurrent Neural Networks (RNNs). In the second (phoneme recognition) we find that a hybrid BLSTM-HMM system improves on an equivalent traditional HMM system, as well as unidirectional LSTM-HMM.", "title": "" }, { "docid": "b0c91e6f8d1d6d41693800e1253b414f", "text": "Tightly coupling GNSS pseudorange and Doppler measurements with other sensors is known to increase the accuracy and consistency of positioning information. Nowadays, high-accuracy geo-referenced lane marking maps are seen as key information sources in autonomous vehicle navigation. When an exteroceptive sensor such as a video camera or a lidar is used to detect them, lane markings provide positioning information which can be merged with GNSS data. In this paper, measurements from a forward-looking video camera are merged with raw GNSS pseudoranges and Dopplers on visible satellites. To create a localization system that provides pose estimates with high availability, dead reckoning sensors are also integrated. The data fusion problem is then formulated as sequential filtering. A reduced-order state space modeling of the observation problem is proposed to give a real-time system that is easy to implement. A Kalman filter with measured input and correlated noises is developed using a suitable error model of the GNSS pseudoranges. 
Our experimental results show that this tightly coupled approach performs better, in terms of accuracy and consistency, than a loosely coupled method using GNSS fixes as inputs.", "title": "" }, { "docid": "69a6cfb649c3ccb22f7a4467f24520f3", "text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-to-sequence question-generation model with a copy mechanism. Empirically, our key-phrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This two-stage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.", "title": "" }, { "docid": "024f88a24593455b532f85327d741bea", "text": "Many women suffer from excessive hair growth, often in combination with polycystic ovarian syndrome (PCOS). It is unclear how hirsutism influences such women's experiences of their bodies. Our aim is to describe and interpret women's experiences of their bodies when living with hirsutism. Interviews were conducted with 10 women with hirsutism. We used a qualitative latent content analysis. Four closely intertwined themes were disclosed: the body was experienced as a yoke, a freak, a disgrace, and as a prison. Hirsutism deeply affects women's experiences of their bodies in a negative way.", "title": "" }, { "docid": "37d1b8960dd95dfca5c307727ddfdc6c", "text": "Reasoning about the future is fundamental to intelligence. In this work, I consider the problem of reasoning about the future actions of an intelligent agent. We find the framework of learning sequential policies beneficial, which poses a set of important design decisions. The focus of this work is the exploration of various policy-learning design decisions, and how these design decisions affect the primary task of forecasting agent futures. Throughout this work, I use demonstrations of agent behavior and often use rich visual data to drive learning. I developed forecasting approaches to excel in diverse, realistic, single-agent domains. These include sparse models to generalize from few demonstrations of human daily activity, adaptive models to continuously learn from demonstrations of human daily activity, and high-dimensional generative models learned from demonstrations of human driving behavior. I also explored incentivized forecasting, which encourages an artificial agent that only has access to partial observations of state to learn predictive state representations in order to perform a task better. While powerful and useful in these settings, our answers have only been tested in single agent domains. Yet, many realistic scenarios involve multiple agents undertaking complex behaviors: for instance, cars and people navigating and negotiating at intersections. Therefore, I propose to extend our generative framework to multiagent domains as the first direction of future work. This involves generalizing representations and inputs to multiple agents. 
Second, in the more difficult multiagent setting where we do not have access to expert demonstrations for one of the agents, our learning system should couple its forecasts of other agents with its own behavior. A third direction of future work is extension of our generative model to the online learning setting. Altogether, our answers will serve as a guiding extensible framework for further development of practical learning-based forecasting systems.", "title": "" }, { "docid": "3b1a7539000a8ddabdaa4888b8bb1adc", "text": "This paper presents evaluations among the most usual maximum power point tracking (MPPT) techniques, doing meaningful comparisons with respect to the amount of energy extracted from the photovoltaic (PV) panel [tracking factor (TF)] in relation to the available power, PV voltage ripple, dynamic response, and use of sensors. Using MatLab/Simulink and dSPACE platforms, a digitally controlled boost dc-dc converter was implemented and connected to an Agilent Solar Array E4350B simulator in order to verify the analytical procedures. The main experimental results are presented for conventional MPPT algorithms and improved MPPT algorithms named IC based on proportional-integral (PI) and perturb and observe based on PI. Moreover, the dynamic response and the TF are also evaluated using a user-friendly interface, which is capable of online program power profiles and computes the TF. Finally, a typical daily insulation is used in order to verify the experimental results for the main PV MPPT methods.", "title": "" }, { "docid": "9e3bba7a681a838fb0b32c1e06eaae93", "text": "This review focuses on the synthesis, protection, functionalization, and application of magnetic nanoparticles, as well as the magnetic properties of nanostructured systems. Substantial progress in the size and shape control of magnetic nanoparticles has been made by developing methods such as co-precipitation, thermal decomposition and/or reduction, micelle synthesis, and hydrothermal synthesis. A major challenge still is protection against corrosion, and therefore suitable protection strategies will be emphasized, for example, surfactant/polymer coating, silica coating and carbon coating of magnetic nanoparticles or embedding them in a matrix/support. Properly protected magnetic nanoparticles can be used as building blocks for the fabrication of various functional systems, and their application in catalysis and biotechnology will be briefly reviewed. Finally, some future trends and perspectives in these research areas will be outlined.", "title": "" }, { "docid": "c06e1491b0aabbbd73628c2f9f45d65d", "text": "With the integration of deep learning into the traditional field of reinforcement learning in the recent decades, the spectrum of applications that artificial intelligence caters is currently very broad. As using AI to play games is a traditional application of reinforcement learning, the project’s objective is to implement a deep reinforcement learning agent that can defeat a video game. Since it is often difficult to determine which algorithms are appropriate given the wide selection of state-of-the-art techniques in the discipline, proper comparisons and investigations of the algorithms are a prerequisite to implementing such an agent. As a result, this paper serves as a platform for exploring the possibility and effectiveness of using conventional state-of-the-art reinforcement learning methods for playing Pacman maps. 
In particular, this paper demonstrates that Combined DQN, a variation of Rainbow DQN, is able to attain high performance in small maps such as 506Pacman, smallGrid and mediumGrid. It was also demonstrated that the trained agents could also play Pacman maps similar to those seen in training, with limited performance. Nevertheless, the algorithm suffers due to its data inefficiency and lack of human-like features, which may be remedied in the future by introducing more human-like features into the algorithm, such as intrinsic motivation and imagination.", "title": "" }, { "docid": "d516a59094e3197bce709f4414db4517", "text": "Authorship attribution deals with identifying the authors of anonymous texts. Traditionally, research in this field has focused on formal texts, such as essays and novels, but recently more attention has been given to texts generated by on-line users, such as e-mails and blogs. Authorship attribution of such on-line texts is a more challenging task than traditional authorship attribution, because such texts tend to be short, and the number of candidate authors is often larger than in traditional settings. We address this challenge by using topic models to obtain author representations. In addition to exploring novel ways of applying two popular topic models to this task, we test our new model that projects authors and documents to two disjoint topic spaces. Utilizing our model in authorship attribution yields state-of-the-art performance on several data sets, containing either formal texts written by a few authors or informal texts generated by tens to thousands of on-line users. We also present experimental results that demonstrate the applicability of topical author representations to two other problems: inferring the sentiment polarity of texts, and predicting the ratings that users would give to items such as movies.", "title": "" } ]
scidocsrr
a6d40d4cd081b4cddb477be87ee72c24
Impulse noise reduction for texture images using real word spelling correction algorithm and local binary patterns
[ { "docid": "d057eece8018a905fe1642a1f40de594", "text": "6 Abstract— Removal of noise from the original signal is still a bottleneck for researchers. There are several methods and techniques published and each method has its own advantages, disadvantages and assumptions. This paper presents a review of some significant work in the field of Image Denoising.The brief introduction of some popular approaches is provided and discussed. Insights and potential future trends are also discussed", "title": "" } ]
[ { "docid": "d9490d9de24e416f9fb363f153597020", "text": "Exploring contextual information in the local region is important for shape understanding and analysis. Existing studies often employ hand-crafted or explicit ways to encode contextual information of local regions. However, it is hard to capture fine-grained contextual information in hand-crafted or explicit manners, such as the correlation between different areas in a local region, which limits the discriminative ability of learned features. To resolve this issue, we propose a novel deep learning model for 3D point clouds, named Point2Sequence, to learn 3D shape features by capturing fine-grained contextual information in a novel implicit way. Point2Sequence employs a novel sequence learning model for point clouds to capture the correlations by aggregating multi-scale areas of each local region with attention. Specifically, Point2Sequence first learns the feature of each area scale in a local region. Then, it captures the correlation between area scales in the process of aggregating all area scales using a recurrent neural network (RNN) based encoder-decoder structure, where an attention mechanism is proposed to highlight the importance of different area scales. Experimental results show that Point2Sequence achieves state-of-the-art performance in shape classification and segmentation tasks.", "title": "" }, { "docid": "8c3ecd27a695fef2d009bbf627820a0d", "text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.", "title": "" }, { "docid": "688d6f57a4567b7d23a849e33ae584d4", "text": "Whereas traditional theories of gender development have focused on individualistic paths, recent analyses have argued for a more social categorical approach to children's understanding of gender. Using a modeling paradigm based on K. Bussey and A. Bandura (1984), 3 experiments (N = 62, N = 32, and N = 64) examined preschoolers' (M age = 52.9 months) imitation of, and memory for, behaviors of same-sex and opposite-sex children and adults. In all experiments, children's imitation of models varied according to the emphasis given to the particular category of models, despite equal attention being paid to both categories. 
It is suggested that the categorical nature of gender, or age, informs children's choice of imitative behaviors.", "title": "" }, { "docid": "6c270eaa2b9b9a0e140e0d8879f5d383", "text": "More than 75% of hospital-acquired or nosocomial urinary tract infections are initiated by urinary catheters, which are used during the treatment of 15-25% of hospitalized patients. Among other purposes, urinary catheters are primarily used for draining urine after surgeries and for urinary incontinence. During catheter-associated urinary tract infections, bacteria travel up to the bladder and cause infection. A major cause of catheter-associated urinary tract infection is attributed to the use of non-ideal materials in the fabrication of urinary catheters. Such materials allow for the colonization of microorganisms, leading to bacteriuria and infection, depending on the severity of symptoms. The ideal urinary catheter is made out of materials that are biocompatible, antimicrobial, and antifouling. Although an abundance of research has been conducted over the last forty-five years on the subject, the ideal biomaterial, especially for long-term catheterization of more than a month, has yet to be developed. The aim of this review is to highlight the recent advances (over the past 10years) in developing antimicrobial materials for urinary catheters and to outline future requirements and prospects that guide catheter materials selection and design.\n\n\nSTATEMENT OF SIGNIFICANCE\nThis review article intends to provide an expansive insight into the various antimicrobial agents currently being researched for urinary catheter coatings. According to CDC, approximately 75% of urinary tract infections are caused by urinary catheters and 15-25% of hospitalized patients undergo catheterization. In addition to these alarming statistics, the increasing cost and health related complications associated with catheter associated UTIs make the research for antimicrobial urinary catheter coatings even more pertinent. This review provides a comprehensive summary of the history, the latest progress in development of the coatings and a brief conjecture on what the future entails for each of the antimicrobial agents discussed.", "title": "" }, { "docid": "685b1471c334c941507ac12eb6680872", "text": "Purpose – The concept of ‘‘knowledge’’ is presented in diverse and sometimes even controversial ways in the knowledge management (KM) literature. The aim of this paper is to identify the emerging views of knowledge and to develop a framework to illustrate the interrelationships of the different knowledge types. Design/methodology/approach – This paper is a literature review to explore how ‘‘knowledge’’ as a central concept is presented and understood in a selected range of KM publications (1990-2004). Findings – The exploration of the knowledge landscape showed that ‘‘knowledge’’ is viewed in four emerging and complementary ways. The ontological, epistemological, commodity, and community views of knowledge are discussed in this paper. The findings show that KM is still a young discipline and therefore it is natural to have different, sometimes even contradicting views of ‘‘knowledge’’ side by side in the literature. Practical implications – These emerging views of knowledge could be seen as opportunities for researchers to provide new contributions. However, this diversity and complexity call for careful and specific clarification of the researchers’ standpoint, for a clear statement of their views of knowledge. 
Originality/value – This paper offers a framework as a compass for researchers to help their orientation in the confusing and ever changing landscape of knowledge.", "title": "" }, { "docid": "81c8745c03bb3019aa5022ecb818ec4f", "text": "The present study examined factors associated with the emergence and cessation of youth cyberbullying and victimization in Taiwan. A total of 2,315 students from 26 high schools were assessed in the 10th grade, with follow-up performed in the 11th grade. Self-administered questionnaires were collected in 2010 and 2011. Multiple logistic regression was conducted to examine the factors. Multivariate analysis results indicated that higher levels of risk factors (online game use, exposure to violence in media, internet risk behaviors, cyber/school bullying experiences) in the 10th grade coupled with an increase in risk factors from grades 10 to 11 could be used to predict the emergence of cyberbullying perpetration/victimization. In contrast, lower levels of risk factors in the 10th grade and higher levels of protective factors coupled with a decrease in risk factors predicted the cessation of cyberbullying perpetration/victimization. Online game use, exposure to violence in media, Internet risk behaviors, and cyber/school bullying experiences can be used to predict the emergence and cessation of youth cyberbullying perpetration and victimization.", "title": "" }, { "docid": "359d76f0b4f758c3a58e886e840c5361", "text": "Cover crops are important components of sustainable agricultural systems. They increase surface residue and aid in the reduction of soil erosion. They improve the structure and water-holding capacity of the soil and thus increase the effectiveness of applied N fertilizer. Legume cover crops such as hairy vetch and crimson clover fix nitrogen and contribute to the nitrogen requirements of subsequent crops. Cover crops can also suppress weeds, provide suitable habitat for beneficial predator insects, and act as non-host crops for nematodes and other pests in crop rotations. This paper reviews the agronomic and economic literature on using cover crops in sustainable food production and reports on past and present research on cover crops and sustainable agriculture at the Beltsville Agricultural Research Center, Maryland. Previous studies suggested that the profitability of cover crops is primarily the result of enhanced crop yields rather than reduced input costs. The experiments at the Beltsville Agricultural Research Center on fresh-market tomato production showed that tomatoes grown with hairy vetch mulch were higher yielding and more profitable than those grown with black polyethylene and no mulch system. Previous studies of cover crops in grain production indicated that legume cover crops such as hairy vetch and crimson clover are more profitable than grass cover crops such as rye or wheat because of the ability of legumes to contribute N to the following crop. A com-", "title": "" }, { "docid": "43682c34dee12aed47d87613dd6b1e6c", "text": "Balancing robot is a robot that relies on two wheels in the process of movement. Basically, to be able to remain standing balanced, the control requires an angle value to be used as tilt set-point. That angle value is a balance point of the robot itself which is the robot's center of gravity. Generally, to find the correct balance point, requires manual measurement or through trial and error, depends on the robot's mechanical design. 
However, when the robot is in a balanced state and its balance point changes because of the mechanical moving parts or bringing a payload, the robot will move towards the heaviest side and then fall. In this research, a cascade PID control system is developed for the balancing robot to keep it balanced without changing the set-point even if the balance point changes. Two parameters are used as feedback error variables: angle error and distance error. When the robot is about to fall, the distance from the starting position will be calculated and used to correct the angle error so that the robot will still balance without changing the set-point but manipulating the control's error value. Based on the research that has been done, the payload that can be carried by the robot is up to 350 grams.", "title": "" }, { "docid": "f17a6c34a7b3c6a7bf266f04e819af94", "text": "BACKGROUND\nPatients with advanced squamous-cell non-small-cell lung cancer (NSCLC) who have disease progression during or after first-line chemotherapy have limited treatment options. This randomized, open-label, international, phase 3 study evaluated the efficacy and safety of nivolumab, a fully human IgG4 programmed death 1 (PD-1) immune-checkpoint-inhibitor antibody, as compared with docetaxel in this patient population.\n\n\nMETHODS\nWe randomly assigned 272 patients to receive nivolumab, at a dose of 3 mg per kilogram of body weight every 2 weeks, or docetaxel, at a dose of 75 mg per square meter of body-surface area every 3 weeks. The primary end point was overall survival.\n\n\nRESULTS\nThe median overall survival was 9.2 months (95% confidence interval [CI], 7.3 to 13.3) with nivolumab versus 6.0 months (95% CI, 5.1 to 7.3) with docetaxel. The risk of death was 41% lower with nivolumab than with docetaxel (hazard ratio, 0.59; 95% CI, 0.44 to 0.79; P<0.001). At 1 year, the overall survival rate was 42% (95% CI, 34 to 50) with nivolumab versus 24% (95% CI, 17 to 31) with docetaxel. The response rate was 20% with nivolumab versus 9% with docetaxel (P=0.008). The median progression-free survival was 3.5 months with nivolumab versus 2.8 months with docetaxel (hazard ratio for death or disease progression, 0.62; 95% CI, 0.47 to 0.81; P<0.001). The expression of the PD-1 ligand (PD-L1) was neither prognostic nor predictive of benefit. Treatment-related adverse events of grade 3 or 4 were reported in 7% of the patients in the nivolumab group as compared with 55% of those in the docetaxel group.\n\n\nCONCLUSIONS\nAmong patients with advanced, previously treated squamous-cell NSCLC, overall survival, response rate, and progression-free survival were significantly better with nivolumab than with docetaxel, regardless of PD-L1 expression level. (Funded by Bristol-Myers Squibb; CheckMate 017 ClinicalTrials.gov number, NCT01642004.).", "title": "" }, { "docid": "6112a0dc02fde9788730b6e634177475", "text": "Reviews of products or services on Internet marketplace websites contain a rich amount of information. Users often wish to survey reviews or review snippets from the perspective of a certain aspect, which has resulted in a large body of work on aspect identification and extraction from such corpora. In this work, we evaluate a newly-proposed neural model for aspect extraction on two practical tasks. The first is to extract canonical sentences of various aspects from reviews, and is judged by human evaluators against alternatives. A k-means baseline does remarkably well in this setting. 
The second experiment focuses on the suitability of the recovered aspect distributions to represent users by the reviews they have written. Through a set of review reranking experiments, we find that aspect-based profiles can largely capture notions of user preferences, by showing that divergent users generate markedly different review rankings.", "title": "" }, { "docid": "c8d56c100db663ba532df4766e458345", "text": "Decomposing sensory measurements into relevant parts is a fundamental prerequisite for solving complex tasks, e.g., in the field of mobile manipulation in domestic environments. In this paper, we present a fast approach to surface reconstruction in range images by means of approximate polygonal meshing. The obtained local surface information and neighborhoods are then used to 1) smooth the underlying measurements, and 2) segment the image into planar regions and other geometric primitives. An evaluation using publicly available data sets shows that our approach does not rank behind state-of-the-art algorithms while allowing to process range images at high frame rates.", "title": "" }, { "docid": "7dfc7588fbf80bd63a31ceace358d351", "text": "Modern virtual agents require knowledge about their environment, the interaction itself, and their interlocutors’ behavior in order to be able to show appropriate nonverbal behavior as well as to adapt dialog policies accordingly. Recent achievements in the area of automatic behavior recognition and understanding can provide information about the interactants’ multimodal nonverbal behavior and subsequently their affective states. In this paper, we introduce a perception markup language (PML) which is a first step towards a standardized representation of perceived nonverbal behaviors. PML follows several design concepts, namely compatibility and synergy, modeling uncertainty, multiple interpretative layers, and extensibility, in order to maximize its usefulness for the research community. We show how we can successfully integrate PML in a fully automated virtual agent system for healthcare applications.", "title": "" }, { "docid": "fd568ae231543517bd660d37c0b71570", "text": "Chemical and electrical interaction within and between cells is well established. Just the opposite is true about cellular interactions via other physical fields. The most probable candidate for an other form of cellular interaction is the electromagnetic field. We review theories and experiments on how cells can generate and detect electromagnetic fields generally, and if the cell-generated electromagnetic field can mediate cellular interactions. We do not limit here ourselves to specialized electro-excitable cells. Rather we describe physical processes that are of a more general nature and probably present in almost every type of living cell. The spectral range included is broad; from kHz to the visible part of the electromagnetic spectrum. We show that there is a rather large number of theories on how cells can generate and detect electromagnetic fields and discuss experimental evidence on electromagnetic cellular interactions in the modern scientific literature. Although small, it is continuously accumulating.", "title": "" }, { "docid": "93ea64131438c3491841599560e798f9", "text": "Synthetic aperture radar (SAR) interferometry (InSAR) is performed using repeat-pass geometry. InSAR technique is used to estimate the topographic reconstruction of the earth surface. 
The main problem of the range-Doppler focusing technique is the nature of the two-dimensional SAR result, affected by the layover indetermination. In order to resolve this problem, a minimum of two sensor acquisitions, separated by a baseline and extended in the cross slant-range, are needed. However, given its multi-temporal nature, these techniques are vulnerable to atmosphere and Earth environment parameters variation in addition to physical platform instabilities. Furthermore, either two radars are needed or an interferometric cycle is required (that spans from days to weeks), which makes real-time DEM estimation impossible. In this work, the authors propose a novel experimental alternative to the InSAR method that uses single-pass acquisitions, using a data driven approach implemented by Deep Neural Networks. We propose a fully Convolutional Neural Network (CNN) Encoder-Decoder architecture, training it on radar images in order to estimate DEMs from single pass image acquisitions. Our results on a set of Sentinel images show that this method is able to learn to some extent the statistical properties of the DEM. The results of this exploratory analysis are encouraging and open the way to the solution of the single-pass DEM estimation problem with data driven approaches.", "title": "" }, { "docid": "b0afcee1ac7ce691f60302dd8298b633", "text": "With the increase of online customer opinions in specialised websites and social networks, the necessity of automatic systems to help to organise and classify customer reviews by domain-specific aspect/categories and sentiment polarity is more important than ever. Supervised approaches for Aspect Based Sentiment Analysis obtain good results for the domain/language they are trained on, but having manually labelled data for training supervised systems for all domains and languages is usually very costly and time consuming. In this work we describe W2VLDA, an almost unsupervised system based on topic modelling, that combined with some other unsupervised methods and a minimal configuration, performs aspect/category classification, aspect-terms/opinion-words separation and sentiment polarity classification for any given domain and language. We evaluate the performance of the aspect and sentiment classification in the multilingual SemEval 2016 task 5 (ABSA) dataset. We show competitive results for several languages (English, Spanish, French and Dutch) and domains (hotels, restaurants, electronic devices).", "title": "" }, { "docid": "e48941f23ee19ec4b26c4de409a84fe2", "text": "Object recognition is challenging especially when the objects from different categories are visually similar to each other. In this paper, we present a novel joint dictionary learning (JDL) algorithm to exploit the visual correlation within a group of visually similar object categories for dictionary learning where a commonly shared dictionary and multiple category-specific dictionaries are accordingly modeled. To enhance the discrimination of the dictionaries, the dictionary learning problem is formulated as a joint optimization by adding a discriminative term on the principle of the Fisher discrimination criterion. As well as presenting the JDL model, a classification scheme is developed to better take advantage of the multiple dictionaries that have been trained. 
The effectiveness of the proposed algorithm has been evaluated on popular visual benchmarks.", "title": "" }, { "docid": "30ef95dffecc369aabdd0ea00b0ce299", "text": "The cloud seems to be an excellent companion of mobile systems, to alleviate battery consumption on smartphones and to backup user's data on-the-fly. Indeed, many recent works focus on frameworks that enable mobile computation offloading to software clones of smartphones on the cloud and on designing cloud-based backup systems for the data stored in our devices. Both mobile computation offloading and data backup involve communication between the real devices and the cloud. This communication does certainly not come for free. It costs in terms of bandwidth (the traffic overhead to communicate with the cloud) and in terms of energy (computation and use of network interfaces on the device). In this work we study the feasibility of both mobile computation offloading and mobile software/data backups in real-life scenarios. In our study we assume an architecture where each real device is associated to a software clone on the cloud. We consider two types of clones: The off-clone, whose purpose is to support computation offloading, and the back-clone, which comes to use when a restore of user's data and apps is needed. We give a precise evaluation of the feasibility and costs of both off-clones and back-clones in terms of bandwidth and energy consumption on the real device. We achieve this through measurements done on a real testbed of 11 Android smartphones and an equal number of software clones running on the Amazon EC2 public cloud. The smartphones have been used as the primary mobile by the participants for the whole experiment duration.", "title": "" }, { "docid": "aff1267d272bf5b4257bc83d3ef84817", "text": "Background. In the beginning of the 21st century, the world summit on population taking place in Madrid approved active ageing, WHO (2002) as the main objective of health and social policies for old people. Few studies have been done on the scientific validity of the construct. This study aims to validate the construct of active ageing and test empirically the WHO (2002) model of Active Ageing in a sample of community-dwelling seniors. Methods. 1322 old people living in the community were interviewed using an extensive assessment protocol to measure WHO's determinants of active ageing and performed an exploratory factor analysis followed by a confirmatory factor analysis. Results. We did not confirm the active ageing model, as most of the groups of determinants are either not independent or not significant. We got to a six-factor model (health, psychological component, cognitive performance, social relationships, biobehavioural component, and personality) explaining 54.6% of total variance. Conclusion. The present paper shows that there are objective as well as subjective variables contributing to active ageing and that psychological variables seem to give a very important contribution to the construct. The profile of active ageing is expected to vary between contexts and cultures and can be used to guide specific community and individually based interventions.", "title": "" }, { "docid": "db190bb0cf83071b6e19c43201f92610", "text": "In this paper, a MATLAB-based simulation of a grid-connected PV system is presented. 
The main components of this simulation are the PV solar panel, boost converter, Maximum Power Point Tracking System (MPPT), and grid-connected PV inverter with a closed-loop control system, which are designed and simulated. Simulation studies are carried out at different solar radiation levels.", "title": "" } ]
scidocsrr
323825d16ace2fa73fa5f71dcec80cae
A Deep Recurrent Collaborative Filtering Framework for Venue Recommendation
[ { "docid": "8b4e09bb13d3d01d3954f32cbb4c9e27", "text": "Higher-level semantics such as visual attributes are crucial for fundamental multimedia applications. We present a novel attribute discovery approach that can automatically identify, model and name attributes from an arbitrary set of image and text pairs that can be easily gathered on the Web. Different from conventional attribute discovery methods, our approach does not rely on any pre-defined vocabularies and human labeling. Therefore, we are able to build a large visual knowledge base without any human efforts. The discovery is based on a novel deep architecture, named Independent Component Multimodal Autoencoder (ICMAE), that can continually learn shared higher-level representations across the visual and textual modalities. With the help of the resultant representations encoding strong visual and semantic evidences, we propose to (a) identify attributes and their corresponding high-quality training images, (b) iteratively model them with maximum compactness and comprehensiveness, and (c) name the attribute models with human understandable words. To date, the proposed system has discovered 1,898 attributes over 1.3 million pairs of image and text. Extensive experiments on various real-world multimedia datasets demonstrate the quality and effectiveness of the discovered attributes, facilitating multimedia applications such as image annotation and retrieval as compared to the state-of-the-art approaches.", "title": "" }, { "docid": "923de5444a381cf63a6f601d82ecf7ac", "text": "Recommending users with preferred point-of-interests (POIs) has become an important task for location-based social networks, which facilitates users' urban exploration by helping them filter out unattractive locations. Although the influence of geographical neighborhood has been studied in the rating prediction task (i.e. regression), few work have exploited it to develop a ranking-oriented objective function to improve top-N item recommendations. To solve this task, we conduct a manual inspection on real-world datasets, and find that each individual's traits are likely to cluster around multiple centers. Hence, we propose a co-pairwise ranking model based on the assumption that users prefer to assign higher ranks to the POIs near previously rated ones. The proposed method can learn preference ordering from non-observed rating pairs, and thus can alleviate the sparsity problem of matrix factorization. Evaluation on two publicly available datasets shows that our method performs significantly better than state-of-the-art techniques for the top-N item recommendation task.", "title": "" } ]
[ { "docid": "6721ff54fde3ac49c2e0e26ae683d5a1", "text": "APT (Advanced Persistent Threat) is a genuine risk to the Internet. With the help of APT malware, attackers can remotely control infected machine and steal the personal information. DNS is well known for malware to find command and control (C&C) servers. The proposed novel system placed at the network departure guide that points toward effectively and efficiently detect APT malware infections based on malicious DNS and traffic analysis. To detect suspicious APT malware C&C domains the system utilizes malicious DNS analysis method, and afterward analyse the traffic of the comparing suspicious IP utilizing anomaly-based and signature based detection innovation. There are separated features in view of big data to describe properties of malware-related DNS. This manufactured a reputation engine to compute a score for an IP address by utilizing these elements vector together.", "title": "" }, { "docid": "ec788f48207b0a001810e1eabf6b2312", "text": "Maximum likelihood factor analysis provides an effective method for estimation of factor matrices and a useful test statistic in the likelihood ratio for rejection of overly simple factor models. A reliability coefficient is proposed to indicate quality of representation of interrelations among attributes in a battery by a maximum likelihood factor analysis. Usually, for a large sample of individuals or objects, the likelihood ratio statistic could indicate that an otherwise acceptable factor model does not exactly represent the interrelations among the attributes for a population. The reliability coefficient could indicate a very close representation in this case and be a better indication as to whether to accept or reject the factor solution.", "title": "" }, { "docid": "b3cdd76dd50bea401ede3bb945c377dc", "text": "First we report on a new threat campaign, underway in Korea, which infected around 20,000 Android users within two months. The campaign attacked mobile users with malicious applications spread via different channels, such as email attachments or SMS spam. A detailed investigation of the Android malware resulted in the identification of a new Android malware family Android/BadAccents. The family represents current state-of-the-art in mobile malware development for banking trojans. Second, we describe in detail the techniques this malware family uses and confront them with current state-of-the-art static and dynamic codeanalysis techniques for Android applications. We highlight various challenges for automatic malware analysis frameworks that significantly hinder the fully automatic detection of malicious components in current Android malware. Furthermore, the malware exploits a previously unknown tapjacking vulnerability in the Android operating system, which we describe. As a result of this work, the vulnerability, affecting all Android versions, will be patched in one of the next releases of the Android Open Source Project.", "title": "" }, { "docid": "531d387a14eefa6a8c45ad64039f29be", "text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. 
Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and PNN learning time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.", "title": "" }, { "docid": "29649adbb39f182af1d84aab476ff8bf", "text": "Users of the online shopping site Amazon are encouraged to post reviews of the products that they purchase. Little attempt is made by Amazon to restrict or limit the content of these reviews. The number of reviews for different products varies, but the reviews provide accessible and plentiful data for relatively easy analysis for a range of applications. This paper seeks to apply and extend the current work in the field of natural language processing and sentiment analysis to data retrieved from Amazon. Naive Bayes and decision list classifiers are used to tag a given review as positive or negative. The number of stars a user gives a product is used as training data to perform supervised machine learning. A corpus containing 50,000 product reviews from 15 products serves as the dataset of study. Top selling and reviewed books on the site are the primary focus of the experiments, but useful features of them that aid in accurate classification are compared to those most useful in classification of other media products. The features, such as bag-of-words and bigrams, are compared to one another in their effectiveness in correctly tagging reviews. Errors in classification and general difficulties regarding the selection of features are analyzed and discussed.", "title": "" }, { "docid": "092d66ca2a57cf0db162565ee353850a", "text": "In contrast with the booming increase of internet data, state-of-the-art QA (question answering) systems have otherwise concerned data from specific domains or resources such as search engine snippets, online forums and Wikipedia in a somewhat isolated way. Users may welcome a more general QA system for its capability to answer questions of various sources, integrated from existing specialized sub-QA engines. In this framework, question classification is the primary task. However, the current paradigms of question classification were focused on some specified type of questions, i.e. factoid questions, which are inappropriate for the general QA. In this paper, we propose a new question classification paradigm, which includes a question taxonomy suitable to the general QA and a question classifier based on MLN (Markov logic network), where rule-based methods and statistical methods are unified into a single framework in a fuzzy discriminative learning approach. Experiments show that our method outperforms traditional question classification approaches.", "title": "" }, { "docid": "26c58183e71f916f37d67f1cf848f021", "text": "With the increasing popularity of herbomineral preparations in healthcare, a new proprietary herbomineral formulation was formulated with ashwagandha root extract and three minerals viz. zinc, magnesium, and selenium. 
The aim of the study was to evaluate the immunomodulatory potential of Biofield Energy Healing (The Trivedi Effect ® ) on the herbomineral formulation using murine splenocyte cells. The test formulation was divided into two parts. One was the control without the Biofield Energy Treatment. The other part was labelled the Biofield Energy Treated sample, which received the Biofield Energy Healing Treatment remotely by twenty renowned Biofield Energy Healers. Through MTT assay, all the test formulation concentrations from 0.00001053 to 10.53 μg/mL were found to be safe with cell viability ranging from 102.61% to 194.57% using splenocyte cells. The Biofield Treated test formulation showed a significant (p≤0.01) inhibition of TNF-α expression by 15.87%, 20.64%, 18.65%, and 20.34% at 0.00001053, 0.0001053, 0.01053, and 0.1053, μg/mL, respectively as compared to the vehicle control (VC) group. The level of TNF-α was reduced by 8.73%, 19.54%, and 14.19% at 0.001053, 0.01053, and 0.1053 μg/mL, respectively in the Biofield Treated test formulation compared to the untreated test formulation. The expression of IL-1β reduced by 22.08%, 23.69%, 23.00%, 16.33%, 25.76%, 16.10%, and 23.69% at 0.00001053, 0.0001053, 0.001053, 0.01053, 0.1053, 1.053 and 10.53 μg/mL, respectively compared to the VC. Additionally, the expression of MIP-1α significantly (p≤0.001) reduced by 13.35%, 22.96%, 25.11%, 22.71%, and 21.83% at 0.00001053, 0.0001053, 0.01053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation significantly down-regulated the MIP-1α expression by 10.75%, 9.53%, 9.57%, and 10.87% at 0.00001053, 0.01053, 0.1053 and 1.053 μg/mL, respectively compared to the untreated test formulation. The results showed the IFN-γ expression was also significantly (p≤0.001) reduced by 39.16%, 40.34%, 27.57%, 26.06%, 42.53%, and 48.91% at 0.0001053, 0.001053, 0.01053, 0.1053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation showed better suppression of IFN-γ expression by 15.46%, 13.78%, 17.14%, and 13.11% at concentrations 0.001053, 0.01053, 0.1053, and 10.53 μg/mL, respectively compared to the untreated test formulation. Overall, the results demonstrated that The Trivedi Effect ® Biofield Energy Healing (TEBEH) has the capacity to potentiate the immunomodulatory and anti-inflammatory activity of the test formulation. Biofield Energy may also be useful in organ transplants, anti-aging, and stress management by improving overall health and quality of life.", "title": "" }, { "docid": "7a98fe4a64c17587ed09c2fa924eb018", "text": "This article describes a methodology for collecting text from the Web to match a target sublanguage both in style (register) and topic. Unlike other work that estimates n-gram statistics from page counts, the approach here is to select and filter documents, which provides more control over the type of material contributing to the n-gram counts. The data can be used in a variety of ways; here, the different sources are combined in two types of mixture models. 
Focusing on conversational speech where data collection can be quite costly, experiments demonstrate the positive impact of Web collections on several tasks with varying amounts of data, including Mandarin and English telephone conversations and English meetings and lectures.", "title": "" }, { "docid": "4d31eda0840ac80874a14b0a9fc2439f", "text": "We identified a patient who excreted large amounts of methylmalonic acid and malonic acid. In contrast to other patients who have been described with combined methylmalonic and malonic aciduria, our patient excreted much larger amounts of methylmalonic acid than malonic acid. Since most previous patients with this biochemical phenotype have been reported to have deficiency of malonyl-CoA decarboxylase, we assayed malonyl-CoA decarboxylase activity in skin fibroblasts derived from our patient and found the enzyme activity to be normal. We examined four isocaloric (2000 kcal/day) dietary regimes administered serially over a period of 12 days with 3 days devoted to each dietary regimen. These diets were high in carbohydrate, fat or protein, or enriched with medium-chain triglycerides. Diet-induced changes in malonic and methylmalonic acid excretion became evident 24–36 h after initiating a new diet. Total excretion of malonic and methylmalonic acid was greater (p<0.01) during a high-protein diet than during a high-carbohydrate or high-fat diet. A high-carbohydrate, low-protein diet was associated with the lowest levels of malonic and methylmalonic acid excretion. Perturbations in these metabolites were most marked at night. On all dietary regimes, our patient excreted 3–10 times more methylmalonic acid than malonic acid, a reversal of the ratios reported in patients with malonyl-CoA decarboxylase deficiency. Our data support a previous observation that combined malonicand methylmalonic aciduria has aetiologies other than malonyl-CoA decar-boxylase deficiency. The malonic acid to methylmalonic acid ratio in response to dietary intervention may be useful in identifying a subgroup of patients with normal enzyme activity.", "title": "" }, { "docid": "ecddd4f80f417dcec49021065394c89a", "text": "Research in the area of educational technology has often been critiqued for a lack of theoretical grounding. In this article we propose a conceptual framework for educational technology by building on Shulman’s formulation of ‘‘pedagogical content knowledge’’ and extend it to the phenomenon of teachers integrating technology into their pedagogy. This framework is the result of 5 years of work on a program of research focused on teacher professional development and faculty development in higher education. It attempts to capture some of the essential qualities of teacher knowledge required for technology integration in teaching, while addressing the complex, multifaceted, and situated nature of this knowledge. We argue, briefly, that thoughtful pedagogical uses of technology require the development of a complex, situated form of knowledge that we call Technological Pedagogical Content Knowledge (TPCK). In doing so, we posit the complex roles of, and interplay among, three main components of learning environments: content, pedagogy, and technology. We argue that this model has much to offer to discussions of technology integration at multiple levels: theoretical, pedagogical, and methodological. 
In this article, we describe the theory behind our framework, provide examples of our teaching approach based upon the framework, and illustrate the methodological contributions that have resulted from this work.", "title": "" }, { "docid": "1d8b6e3c415510329fb82ec0c58cb2e6", "text": "Functional antibody delivery in living cells would enable the labelling and manipulation of intracellular antigens, which constitutes a long-thought goal in cell biology and medicine. Here we present a modular strategy to create functional cell-permeable nanobodies capable of targeted labelling and manipulation of intracellular antigens in living cells. The cell-permeable nanobodies are formed by the site-specific attachment of intracellularly stable (or cleavable) cyclic arginine-rich cell-penetrating peptides to camelid-derived single-chain VHH antibody fragments. We used this strategy for the non-endocytic delivery of two recombinant nanobodies into living cells, which enabled the relocalization of the polymerase clamp PCNA (proliferating cell nuclear antigen) and tumour suppressor p53 to the nucleolus, and thereby allowed the detection of protein-protein interactions that involve these two proteins in living cells. Furthermore, cell-permeable nanobodies permitted the co-transport of therapeutically relevant proteins, such as Mecp2, into the cells. This technology constitutes a major step in the labelling, delivery and targeted manipulation of intracellular antigens. Ultimately, this approach opens the door towards immunostaining in living cells and the expansion of immunotherapies to intracellular antigen targets.", "title": "" }, { "docid": "5b0842894cbf994c3e63e521f7352241", "text": "The burgeoning field of genomics has revived interest in multiple testing procedures by raising new methodological and computational challenges. For example, microarray experiments generate large multiplicity problems in which thousands of hypotheses are tested simultaneously. Westfall and Young (1993) propose resampling-based p-value adjustment procedures which are highly relevant to microarray experiments. This article discusses different criteria for error control in resampling-based multiple testing, including (a) the family wise error rate of Westfall and Young (1993) and (b) the false discovery rate developed by Benjamini and Hochberg (1995), both from a frequentist viewpoint; and (c) the positive false discovery rate of Storey (2002a), which has a Bayesian motivation. We also introduce our recently developed fast algorithm for implementing the minP adjustment to control family-wise error rate. Adjusted p-values for different approaches are applied to gene expression data from two recently published microarray studies. The properties of these procedures for multiple testing are compared.", "title": "" }, { "docid": "1c90adf8ec68ff52e777b2041f8bf4c4", "text": "In many situations we have some measurement of confidence on “positiveness” for a binary label. The “positiveness” is a continuous value whose range is a bounded interval. It quantifies the affiliation of each training data to the positive class. We propose a novel learning algorithm called expectation loss SVM (eSVM) that is devoted to the problems where only the “positiveness” instead of a binary label of each training sample is available. Our e-SVM algorithm can also be readily extended to learn segment classifiers under weak supervision where the exact positiveness value of each training example is unobserved. 
In experiments, we show that the e-SVM algorithm can effectively address the segment proposal classification task under both strong supervision (e.g. the pixel-level annotations are available) and the weak supervision (e.g. only bounding-box annotations are available), and outperforms the alternative approaches. Besides, we further validate this method on two major tasks of computer vision: semantic segmentation and object detection. Our method achieves the state-of-the-art object detection performance on PASCAL VOC 2007 dataset.", "title": "" }, { "docid": "ce44b104aa5186c86b6acfdc07ef5f9b", "text": "Today's Enterprises are facing many challenges in the service oriented, customer experience centric and customer demand driven global environment where ICT is becoming the leading enabler and partner of the modern enterprise. In the last decade, many enterprises have invested heavily in SOA-aligned IT transformations, but not harvested what SOA promised to provide. Now the API and Microservice paradigm has emerged as the \"next big thing\" for delivering IT outcomes to support the modern enterprise, with many technology vendors and service providers jumping on the bandwagon. This paper undertakes a critical investigation of the key concepts around SOA, API and Microservices, identifying similarities and differences between them and dispelling the confusion and hype around them. Based on our discussion and analysis, this paper presents a set of recommendations and best practices on the effective use and management of enterprise software components, drawing upon the best of SOA, API and Microservice concepts and practice.", "title": "" }, { "docid": "831ea386dcb15a6967196b90cf3b6516", "text": "Advanced metering infrastructure (AMI) is an imperative component of the smart grid, as it is responsible for collecting, measuring, analyzing energy usage data, and transmitting these data to the data concentrator and then to a central system in the utility side. Therefore, the security of AMI is one of the most demanding issues in the smart grid implementation. In this paper, we propose an intrusion detection system (IDS) architecture for AMI which will act as a complement to other security measures. This IDS architecture consists of three local IDSs placed in smart meters, data concentrators, and central system (AMI headend). For detecting anomalies, we use a data stream mining approach on the public KDD CUP 1999 data set to analyse the requirements of the three components in AMI. Our results and analysis show that the stream data mining technique has promising potential for solving security issues in AMI.", "title": "" }, { "docid": "93e2f88d13fc69fc11cd70fbe9685c2f", "text": "A method for kinematics modeling of a six-wheel Rocker-Bogie mobile robot is described in detail. The forward kinematics is derived by using wheel Jacobian matrices in conjunction with wheel-ground contact angle estimation. The inverse kinematics is to obtain the wheel velocities and steering angles from the desired forward velocity and turning rate of the robot. Traction Control is also developed to improve traction by comparing information from onboard sensors and wheel velocities to minimize wheel slip. 
Finally, a simulation of a small robot using rockerbogie suspension has been performed and simulate in two conditions of surfaces including climbing slope and travel over a ditch.", "title": "" }, { "docid": "04c31483ace237a2f00e3478d9432d10", "text": "IMPORTANCE\nIatrogenic occlusion of the ophthalmic artery and its branches is a rare but devastating complication of cosmetic facial filler injections.\n\n\nOBJECTIVE\nTo investigate clinical and angiographic features of iatrogenic occlusion of the ophthalmic artery and its branches caused by cosmetic facial filler injections.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nData from 44 patients with occlusion of the ophthalmic artery and its branches after cosmetic facial filler injections were obtained retrospectively from a national survey completed by members of the Korean Retina Society from 27 retinal centers. Clinical features were compared between patients grouped by angiographic findings and injected filler material.\n\n\nMAIN OUTCOMES AND MEASURES\nVisual prognosis and its relationship to angiographic findings and injected filler material.\n\n\nRESULTS\nOphthalmic artery occlusion was classified into 6 types according to angiographic findings. Twenty-eight patients had diffuse retinal and choroidal artery occlusions (ophthalmic artery occlusion, generalized posterior ciliary artery occlusion, and central retinal artery occlusion). Sixteen patients had localized occlusions (localized posterior ciliary artery occlusion, branch retinal artery occlusion, and posterior ischemic optic neuropathy). Patients with diffuse occlusions showed worse initial and final visual acuity and less visual gain compared with those having localized occlusions. Patients receiving autologous fat injections (n = 22) had diffuse ophthalmic artery occlusions, worse visual prognosis, and a higher incidence of combined brain infarction compared with patients having hyaluronic acid injections (n = 13).\n\n\nCONCLUSIONS AND RELEVANCE\nClinical features of iatrogenic occlusion of the ophthalmic artery and its branches following cosmetic facial filler injections were diverse according to the location and extent of obstruction and the injected filler material. Autologous fat injections were associated with a worse visual prognosis and a higher incidence of combined cerebral infarction. Extreme caution and care should be taken during these injections, and physicians should be aware of a diverse spectrum of complications following cosmetic facial filler injections.", "title": "" }, { "docid": "370ec5c556b70ead92bc45d1f419acaf", "text": "Despite the identification of circulating tumor cells (CTCs) and cell-free DNA (cfDNA) as potential blood-based biomarkers capable of providing prognostic and predictive information in cancer, they have not been incorporated into routine clinical practice. This resistance is due in part to technological limitations hampering CTC and cfDNA analysis, as well as a limited understanding of precisely how to interpret emergent biomarkers across various disease stages and tumor types. 
In recognition of these challenges, a group of researchers and clinicians focused on blood-based biomarker development met at the Canadian Cancer Trials Group (CCTG) Spring Meeting in Toronto, Canada on 29 April 2016 for a workshop discussing novel CTC/cfDNA technologies, interpretation of data obtained from CTCs versus cfDNA, challenges regarding disease evolution and heterogeneity, and logistical considerations for incorporation of CTCs/cfDNA into clinical trials, and ultimately into routine clinical use. The objectives of this workshop included discussion of the current barriers to clinical implementation and recent progress made in the field, as well as fueling meaningful collaborations and partnerships between researchers and clinicians. We anticipate that the considerations highlighted at this workshop will lead to advances in both basic and translational research and will ultimately impact patient management strategies and patient outcomes.", "title": "" }, { "docid": "d050730d7a5bd591b805f1b9729b0f2d", "text": "In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping bought about by such deep learning approaches, given sufficient training sets.", "title": "" } ]
scidocsrr
a0ad90d38c6697bfa454df200494bca4
3D segmentation of mandible from multisectional CT scans by convolutional neural networks
[ { "docid": "4d2be7aac363b77c6abd083947bc28c7", "text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.", "title": "" } ]
[ { "docid": "1e82d6acef7e5b5f0c2446d62cf03415", "text": "The purpose of this research is to characterize and model the self-heating effect of multi-finger n-channel MOSFETs. Self-heating effect (SHE) does not need to be analyzed for single-finger bulk CMOS devices. However, it should be considered for multi-finger n-channel MOSFETs that are mainly used for RF-CMOS applications. The SHE mechanism was analyzed based on a two-dimensional device simulator. A compact model, which is a BSIM6 model with additional equations, was developed and implemented in a SPICE simulator with Verilog-A language. Using the proposed model and extracted parameters excellent agreements have been obtained between measurements and simulations in DC and S-parameter domain whereas the original BSIM6 shows inconsistency between static DC and small signal AC simulations due to the lack of SHE. Unlike the generally-used sub-circuits based SHE models including in BSIMSOI models, the proposed SHE model can converge in large scale circuits.", "title": "" }, { "docid": "4e7f0bebe643092b4e63a06987411a5d", "text": "Prenatal ultrasound is an integral part of caring for pregnant women in the United States. Although surprisingly few data exist to support the clinical benefit of screening ultrasound during pregnancy, its use continues to rise. Urologic anomalies are among the most commonly identified, with overall detection sensitivity approaching 90%. Prenatal hydronephrosis is the most frequently identified finding and predicting postnatal pathology based on its presence can be difficult. As the degree of fetal hydronephrosis increases so does the risk of true urinary tract pathology. Diagnoses that require more urgent care include causes of lower urinary tract obstruction and bladder and cloacal exstrophy.", "title": "" }, { "docid": "7f988f0bed497857eac00dd8781a2158", "text": "BACKGROUND/PURPOSE\nHigh-intensity focused ultrasound (HIFU) has been used for skin tightening. However, there is a rising concern of irreversible adverse effects. Our aim was to evaluate the depth of thermal injury zone after HIFU energy passes through different condition.\n\n\nMATERIALS AND METHODS\nTo analyze the consistency of the HIFU device, phantom tests were performed. Simulations were performed on ex vivo porcine tissues to estimate the area of the thermal coagulation point (TCP) according to the applied energy and skin condition. The experiment was designed in three orientations: normal direction (from epidermis to fascia), reverse direction (from fascia to epidermis), and normal direction without epidermis.\n\n\nRESULTS\nThe TCP was larger and wider depending on the applied fluence and handpieces (HPs). When we measured TCP in different directions, the measured area in the normal direction was more superficially located than that in the reverse direction. The depth of the TCP in the porcine skin without epidermis was detected at 130% deeper than in skin with an intact epidermis.\n\n\nCONCLUSION\nThe affected area by HIFU is dependent on the skin condition and the characteristics of the HP and applied fluence. 
Consideration of these factors may be the key to minimizing the unwanted adverse effects.", "title": "" }, { "docid": "235ed0d7a20b67e227db9e35a3865d2b", "text": "Convolutional neural networks are the most widely used deep learning algorithms for traffic sign classification to date[1], but they fail to capture the pose, view, and orientation of images because of the intrinsic inability of the max pooling layer. This paper proposes a novel method for traffic sign detection using a deep learning architecture called capsule networks that achieves outstanding performance on the German traffic sign dataset. A capsule network consists of capsules, which are groups of neurons representing the instantiation parameters of an object such as its pose and orientation[2], using the dynamic routing and routing-by-agreement algorithms. Unlike previous approaches based on manual feature extraction or multiple deep neural networks with many parameters, our method eliminates the manual effort and provides resistance to spatial variances. CNNs can be fooled easily using various adversarial attacks[3], whereas capsule networks can overcome such attacks and offer more reliability in traffic sign detection for autonomous vehicles. Capsule networks have achieved a state-of-the-art accuracy of 97.6% on the German Traffic Sign Recognition Benchmark dataset (GTSRB).", "title": "" }, { "docid": "904454a191da497071ee9b835561c6e6", "text": "We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stop waves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained.", "title": "" }, { "docid": "edb5b733e77271dd4e1afaf742388a68", "text": "The Intolerance of Uncertainty Model was initially developed as an explanation for worry within the context of generalized anxiety disorder. However, recent research has identified intolerance of uncertainty (IU) as a possible transdiagnostic maintaining factor across the anxiety disorders and depression. The aim of this study was to determine whether IU mediated the relationship between neuroticism and symptoms related to various anxiety disorders and depression in a treatment-seeking sample (N=328). Consistent with previous research, IU was significantly associated with neuroticism as well as with symptoms of social phobia, panic disorder and agoraphobia, obsessive-compulsive disorder, generalized anxiety disorder, and depression. Moreover, IU explained unique variance in these symptom measures when controlling for neuroticism. Mediational analyses showed that IU was a significant partial mediator between neuroticism and all symptom measures, even when controlling for symptoms of other disorders. More specifically, anxiety in anticipation of future uncertainty (prospective anxiety) partially mediated the relationship between neuroticism and symptoms of generalized anxiety disorder (i.e. worry) and obsessive-compulsive disorder, whereas inaction in the face of uncertainty (inhibitory anxiety) partially mediated the relationship between neuroticism and symptoms of social anxiety, panic disorder and agoraphobia, and depression. Sobel's test demonstrated that all hypothesized mediational pathways were associated with significant indirect effects, although the mediation effect was stronger for worry than other symptoms. 
Potential implications of these findings for the treatment of anxiety disorders and depression are discussed.", "title": "" }, { "docid": "9cf4d68ab09e98cd5b897308c8791d26", "text": "Gesture Recognition Technology has evolved greatly over the years. The past has seen the contemporary Human – Computer Interface techniques and their drawbacks, which limit the speed and naturalness of the human brain and body. As a result gesture recognition technology has developed since the early 1900s with a view to achieving ease and lessening the dependence on devices like keyboards, mice and touchscreens. Attempts have been made to combine natural gestures to operate with the technology around us to enable us to make optimum use of our body gestures making our work faster and more human friendly. The present has seen huge development in this field ranging from devices like virtual keyboards, video game controllers to advanced security systems which work on face, hand and body recognition techniques. The goal is to make full use of the movements of the body and every angle made by the parts of the body in order to supplement technology to become human friendly and understand natural human behavior and gestures. The future of this technology is very bright with prototypes of amazing devices in research and development to make the world equipped with digital information at hand whenever and wherever required.", "title": "" }, { "docid": "d67d126d40af2f23b001e2cbf2a2df30", "text": "Our method for multi-lingual geoparsing uses monolingual tools and resources along with machine translation and alignment to return location words in many languages. Not only does our method save the time and cost of developing geoparsers for each language separately, but also it allows the possibility of a wide range of language capabilities within a single interface. We evaluated our method in our LanguageBridge prototype on location named entities using newswire, broadcast news and telephone conversations in English, Arabic and Chinese data from the Linguistic Data Consortium (LDC). Our results for geoparsing Chinese and Arabic text using our multi-lingual geoparsing method are comparable to our results for geoparsing English text with our English tools. Furthermore, experiments using our machine translation approach results in accuracy comparable to results from the same data that was translated manually.", "title": "" }, { "docid": "e0f7f087a4d8a33c1260d4ed0558edc3", "text": "In this review paper, it is intended to summarize and compare the methods of automatic detection of microcalcifications in digitized mammograms used in various stages of the Computer Aided Detection systems (CAD). In particular, the pre processing and enhancement, bilateral subtraction techniques, segmentation algorithms, feature extraction, selection and classification, classifiers, Receiver Operating Characteristic (ROC); Free-response Receiver Operating Characteristic (FROC) analysis and their performances are studied and compared.", "title": "" }, { "docid": "5b41a7c287b54b16e9d791cb62d7aa5a", "text": "Recent evidence demonstrates that children are selective in their social learning, preferring to learn from a previously accurate speaker than from a previously inaccurate one. We examined whether children assessing speakers' reliability take into account how speakers achieved their prior accuracy. 
In Study 1, when faced with two accurate informants, 4- and 5-year-olds (but not 3-year-olds) were more likely to seek novel information from an informant who had previously given the answers unaided than from an informant who had always relied on help from a third party. Similarly, in Study 2, 4-year-olds were more likely to trust the testimony of an unaided informant over the testimony provided by an assisted informant. Our results indicate that when children reach around 4 years of age, their selective trust extends beyond simple generalizations based on informants' past accuracy to a more sophisticated selectivity that distinguishes between truly knowledgeable informants and merely accurate informants who may not be reliable in the long term.", "title": "" }, { "docid": "e62b7803898420e961a3d49efe5e1958", "text": "In this paper, we present an algorithm for unconstrained face verification based on deep convolutional features and evaluate it on the newly released IARPA Janus Benchmark A (IJB-A) dataset as well as on the traditional Labeled Face in the Wild (LFW) dataset. The IJB-A dataset includes real-world unconstrained faces from 500 subjects with full pose and illumination variations which are much harder than the LFW and Youtube Face (YTF) datasets. The deep convolutional neural network (DCNN) is trained using the CASIA-WebFace dataset. Results of experimental evaluations on the IJB-A and the LFW datasets are provided.", "title": "" }, { "docid": "6816bb15dba873244306f22207525bee", "text": "Imbalance suggests a feeling of dynamism and movement in static objects. It is therefore not surprising that many 3D models stand in impossibly balanced configurations. As long as the models remain in a computer this is of no consequence: the laws of physics do not apply. However, fabrication through 3D printing breaks the illusion: printed models topple instead of standing as initially intended. We propose to assist users in producing novel, properly balanced designs by interactively deforming an existing model. We formulate balance optimization as an energy minimization, improving stability by modifying the volume of the object, while preserving its surface details. This takes place during interactive editing: the user cooperates with our optimizer towards the end result. We demonstrate our method on a variety of models. With our technique, users can produce fabricated objects that stand in one or more surprising poses without requiring glue or heavy pedestals.", "title": "" }, { "docid": "ae9770179419eb898f944725d8f2165c", "text": "Cloud computing adoption has represented a big challenge for all kinds of companies all over the world. The challenge involves such questions as where to start, which provider should the company choose or whether it is even worthwhile. With a constantly changing economic environment, businesses must assess current technologies and offerings to remain competitive. The possibility of migrating a company's services and infrastructure to the cloud may seem attractive. However, without proper guidance, the results may not be as expected, leading a loss of time and money. As each company has its own needs and requirements, industry-focused frameworks have been proposed (e.g. for educational or governmental institutions). Although these frameworks are useful, they are not applicable to every business. Hence, a generic, widely-applicable and implementable cloud computing adoption framework is proposed in this paper. 
It takes the best outcomes of previous studies, best-practice suggestions, as well as authors' additions, and sums them up into a more robust, unified framework. The framework consists of 6 detailed phases carrying the user from knowing the company's current state to successfully migrating the data, services, and infrastructure to the cloud. These steps are intended to help IT directors and other decision-makers to reduce risks and maximize benefits throughout the cloud computing adoption process. Data security risks are not discussed in this paper as other authors have already sufficiently studied them. This framework was developed from a business perspective.", "title": "" }, { "docid": "04165f38c90c84e17d87bb4ac7f43f37", "text": "Globalisation is becoming a force that is revolutionising international trade, particularly that of animals and animal products. There is increasing interest in animal welfare worldwide, and as part of its 2001-2005 Strategic Plan the World Organisation for Animal Health (OIE) identified the development of international standards on animal welfare as a priority. The OIE's scientific approach to standard-setting provides the foundation for the development, and acceptance by all OIE Member Countries, of these animal welfare guidelines. The paper discusses how these guidelines on animal welfare can be implemented, both within the provisions of World Trade Organization (WTO) agreements and within the framework of voluntary codes of conduct. Even if animal welfare guidelines are not covered by any WTO agreements in the future, bi- and multilateral agreements, voluntary corporate codes, and transparent labelling of products should result in a progressive acceptance of OIE guidelines. Ultimately, consumer demands and demonstrable gains in animal production will result in an incremental evolution in animal welfare consciousness and adherence to international standards.", "title": "" }, { "docid": "1fc6b2ffedfddb0dc476c3470c52fb13", "text": "Exponential growth in Electronic Healthcare Records (EHR) has resulted in new opportunities and urgent needs for discovery of meaningful data-driven representations and patterns of diseases in Computational Phenotyping research. Deep Learning models have shown superior performance for robust prediction in computational phenotyping tasks, but suffer from the issue of model interpretability which is crucial for clinicians involved in decision-making. In this paper, we introduce a novel knowledge-distillation approach called Interpretable Mimic Learning, to learn interpretable phenotype features for making robust prediction while mimicking the performance of deep learning models. Our framework uses Gradient Boosting Trees to learn interpretable features from deep learning models such as Stacked Denoising Autoencoder and Long Short-Term Memory. Exhaustive experiments on a real-world clinical time-series dataset show that our method obtains similar or better performance than the deep learning models, and it provides interpretable phenotypes for clinical decision making.", "title": "" }, { "docid": "5305e147b2aa9646366bc13deb0327b0", "text": "This longitudinal case-study aimed at examining whether purposely teaching for the promotion of higher order thinking skills enhances students’ critical thinking (CT), within the framework of science education. Within a pre-, post-, and post–post experimental design, high school students, were divided into three research groups. 
The experimental group (n=57) consisted of science students who were exposed to teaching strategies designed for enhancing higher order thinking skills. Two other groups: science (n=41) and non-science majors (n=79), were taught traditionally, and acted as control. By using critical thinking assessment instruments, we have found that the experimental group showed a statistically significant improvement on critical thinking skills components and disposition towards critical thinking subscales, such as truth-seeking, open-mindedness, self-confidence, and maturity, compared with the control groups. Our findings suggest that if teachers purposely and persistently practice higher order thinking strategies for example, dealing in class with real-world problems, encouraging open-ended class discussions, and fostering inquiry-oriented experiments, there is a good chance for a consequent development of critical thinking capabilities.", "title": "" }, { "docid": "64dc61e853f41654dba602c7362546b5", "text": "This paper introduces our work on the communication stack of wireless sensor networks. We present the IPv6 approach for wireless sensor networks called 6LoWPAN in its IETF charter. We then compare the different implementations of 6LoWPAN subsets for several sensor nodes platforms. We present our approach for the 6LoWPAN implementation which aims to preserve the advantages of modularity while keeping a small memory footprint and a good efficiency.", "title": "" }, { "docid": "605a078c74d37007654094b4b426ece8", "text": "Currently, blockchain technology, which is decentralized and may provide tamper-resistance to recorded data, is experiencing exponential growth in industry and research. In this paper, we propose the MIStore, a blockchain-based medical insurance storage system. Due to blockchain’s the property of tamper-resistance, MIStore may provide a high-credibility to users. In a basic instance of the system, there are a hospital, patient, insurance company and n servers. Specifically, the hospital performs a (t, n)-threshold MIStore protocol among the n servers. For the protocol, any node of the blockchain may join the protocol to be a server if the node and the hospital wish. Patient’s spending data is stored by the hospital in the blockchain and is protected by the n servers. Any t servers may help the insurance company to obtain a sum of a part of the patient’s spending data, which servers can perform homomorphic computations on. However, the n servers cannot learn anything from the patient’s spending data, which recorded in the blockchain, forever as long as more than n − t servers are honest. Besides, because most of verifications are performed by record-nodes and all related data is stored at the blockchain, thus the insurance company, servers and the hospital only need small memory and CPU. Finally, we deploy the MIStore on the Ethererum blockchain and give the corresponding performance evaluation.", "title": "" }, { "docid": "5d934dd45e812336ad12cee90d1e8cdf", "text": "As research on the connection between narcissism and social networking site (SNS) use grows, definitions of SNS and measurements of their use continue to vary, leading to conflicting results. To improve understanding of the relationship between narcissism and SNS use, as well as the implications of differences in definition and measurement, we examine two ways of measuring Facebook and Twitter use by testing the hypothesis that SNS use is positively associated with narcissism. 
We also explore the relation between these types of SNS use and different components of narcissism within college students and general adult samples. Our findings suggest that for college students, posting on Twitter is associated with the Superiority component of narcissistic personality while Facebook posting is associated with the Exhibitionism component. Conversely, adults high in Superiority post on Facebook more rather than Twitter. For adults, Facebook and Twitter are both used more by those focused on their own appearances but not as a means of showing off, as is the case with college students. Given these differences, it is essential for future studies of SNS use and personality traits to distinguish between different types of SNS, different populations, and different types of use. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "57e5d801778711f2ab9a152f08ae53e8", "text": "A modular multilevel converter (MMC) is one of the next-generation multilevel PWM converters intended for high- or medium-voltage power conversion without transformers. The MMC consists of cascade connection of multiple bidirectional PWM chopper-cells and floating dc capacitors per leg, thus requiring voltage-balancing control of their chopper-cells. However, no paper has been discussed explicitly on voltage-balancing control with theoretical and experimental verifications. This paper deals with two types of modular multilevel PWM converters with focus on their circuit configurations and voltage-balancing control. Combination of averaging and balancing controls enables the MMCs to achieve voltage balancing without any external circuit. The viability of the MMCs as well as the effectiveness of the PWM control method is confirmed by simulation and experiment.", "title": "" } ]
scidocsrr
c7d6b4cff4182b4dfa6c09834d38cf31
The standard of user-centered design and the standard definition of usability: analyzing ISO 13407 against ISO 9241-11
[ { "docid": "9eaf39d4b612c3bd272498eb8a91effc", "text": "The relationship between the different approaches to quality in ISO standards is reviewed, contrasting the manufacturing approach to quality in ISO 9000 (quality is conformance to requirements) with the product orientation of ISO 8402 (quality is the presence of specified features) and the goal orientation of quality in use in ISO 14598-1 (quality is meeting user needs). It is shown how ISO 9241-11 enables quality in use to be measured, and ISO 13407 defines the activities necessary in the development lifecycle for achieving quality in use. APPROACHES TO QUALITY Although the term quality seems self-explanatory in everyday usage, in practice there are many different views of what it means and how it should be achieved as part of a software production process. ISO DEFINITIONS OF QUALITY ISO 9000 is concerned with quality assurance to provide confidence that a product will satisfy given requirements. Interpreted literally, this puts quality in the hands of the person producing the requirements specification a product may be deemed to have quality even if the requirements specification is inappropriate. This is one of the interpretations of quality reviewed by Garvin (1984). He describes it as Manufacturing quality: a product which conforms to specified requirements. A different emphasis is given in ISO 8402 which defines quality as the totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs. This is an example of what Garvin calls Product quality: an inherent characteristic of the product determined by the presence or absence of measurable product attributes. Many organisations would like to be able to identify those attributes which can be designed into a product or evaluated to ensure quality. ISO 9126 (1992) takes this approach, and categorises the attributes of software quality as: functionality, efficiency, usability, reliability, maintainability and portability. To the extent that user needs are well-defined and common to the intended users this implies that quality is an inherent attribute of the product. However, if different groups of users have different needs, then they may require different characteristics for a product to have quality for their purposes. Assessment of quality thus becomes dependent on the perception of the user. USER PERCEIVED QUALITY AND QUALITY IN USE Garvin defines User perceived quality as the combination of product attributes which provide the greatest satisfaction to a specified user. Most approaches to quality do not deal explicitly with userperceived quality. User-perceived quality is regarded as an intrinsically inaccurate judgement of product quality. For instance Garvin, 1984, observes that \"Perceptions of quality can be as subjective as assessments of aesthetics\". However, there is a more fundamental reason for being concerned with user-perceived quality. Products can only have quality in relation to their intended purpose. For instance, the quality attributes required of an office carpet may be very different from those required of a bedroom carpet. For conventional products this is assumed to be selfevident. For general-purpose products it creates a problem. A text editor could be used by programmers for producing code, or by secretaries for producing letters. Some of the quality attributes required will be the same, but others will be different. 
Even for a word processor, the functionality, usability and efficiency attributes required by a trained user may be very different from those required by an occasional user. Reconciling work on usability with traditional approaches to software quality has led to another broader and potentially important view of quality which has been outside the scope of most existing quality systems. This embraces userperceived quality by relating quality to the needs of the user of an interactive product. ISO 14598-1 defines External quality as the extent to which a product satisfies stated and implied needs when used under specified conditions. This moves the focus of quality from the product in isolation to the satisfaction of the needs of particular users in particular situations. The purpose of a product is to help users achieve particular goals, which leads to the definition of Quality in use in ISO DIS 14598-1 as the effectiveness, efficiency and satisfaction with which specified users can achieve specified goals in specified environments. A product meets the requirements of the user if it is effective (accurate and complete), efficient in use of time and resources, and satisfying, regardless of the specific attributes it possesses. Specifying requirements in terms of performance has many benefits. This is recognised in the rules for drafting ISO standards (ISO, 1992) which suggest that to provide design flexibility, standards should specify the performance required of a product rather than the technical attributes needed to achieve the performance. Quality in use is a means of applying this principle to the performance which a product enables a human to achieve. An example is the ISO standard for VDT display screens (ISO 9241-3). The purpose of the standard is to ensure that the screen has the technical attributes required to achieve quality in use. The current version of the standard is specified in terms of the technical attributes of a traditional CRT. It is intended to extend the standard to permit alternative new technology screens to conform if it can be demonstrated that users are as effective, efficient and satisfied with the new screen as with an existing screen which meets the technical specifications. SOFTWARE QUALITY IN USE: ISO 14598-1 The purpose of designing an interactive system is to meet the needs of users: to provide quality in use (see Figure 1, from ISO/IEC 14598-1). The internal software attributes will determine the quality of a software product in use in a particular context. Software quality attributes are the cause, quality in use the effect. Quality in use is (or at least should be) the objective, software product quality is the means of achieving it. system behaviour external quality requirements External quality internal quality requirements Internal quality software attributes Specification Design and development Needs Quality in use Operation", "title": "" } ]
[ { "docid": "be0d51871cad4912dcfa05f1edfec3f5", "text": "Peripheral information is information that is not central to a person's current task, but provides the person the opportunity to learn more, to do a better job, or to keep track of less important tasks. Though peripheral information displays are ubiquitous, they have been rarely studied. For computer users, a common peripheral display is a scrolling text display that provides announcements, sports scores, stock prices, or other news. In this paper, we investigate how to design peripheral displays so that they provide the most information while having the least impact on the user's performance on the main task. We report a series of experiments on scrolling displays aimed at examining tradeoffs between distraction of scrolling motion and memorability of information displayed. Overall, we found that continuously scrolling displays are more distracting than displays that start and stop, but information in both is remembered equally well. These results are summarized in a set of design recommendations.", "title": "" }, { "docid": "e8c7f00d775254bd6b8c5393397d05a6", "text": "PURPOSE\nVirtual reality devices, including virtual reality head-mounted displays, are becoming increasingly accessible to the general public as technological advances lead to reduced costs. However, there are numerous reports that adverse effects such as ocular discomfort and headache are associated with these devices. To investigate these adverse effects, questionnaires that have been specifically designed for other purposes such as investigating motion sickness have often been used. The primary purpose of this study was to develop a standard questionnaire for use in investigating symptoms that result from virtual reality viewing. In addition, symptom duration and whether priming subjects elevates symptom ratings were also investigated.\n\n\nMETHODS\nA list of the most frequently reported symptoms following virtual reality viewing was determined from previously published studies and used as the basis for a pilot questionnaire. The pilot questionnaire, which consisted of 12 nonocular and 11 ocular symptoms, was administered to two groups of eight subjects. One group was primed by having them complete the questionnaire before immersion; the other group completed the questionnaire postviewing only. Postviewing testing was carried out immediately after viewing and then at 2-min intervals for a further 10 min.\n\n\nRESULTS\nPriming subjects did not elevate symptom ratings; therefore, the data were pooled and 16 symptoms were found to increase significantly. The majority of symptoms dissipated rapidly, within 6 min after viewing. Frequency of endorsement data showed that approximately half of the symptoms on the pilot questionnaire could be discarded because <20% of subjects experienced them.\n\n\nCONCLUSIONS\nSymptom questionnaires to investigate virtual reality viewing can be administered before viewing, without biasing the findings, allowing calculation of the amount of change from pre- to postviewing. However, symptoms dissipate rapidly and assessment of symptoms needs to occur in the first 5 min postviewing. 
Thirteen symptom questions, eight nonocular and five ocular, were determined to be useful for a questionnaire specifically related to virtual reality viewing using a head-mounted display.", "title": "" }, { "docid": "6b79d1db9565fc7540d66ff8bf5aae1f", "text": "Recognizing sarcasm often requires a deep understanding of multiple sources of information, including the utterance, the conversational context, and real world facts. Most of the current sarcasm detection systems consider only the utterance in isolation. There are some limited attempts toward taking into account the conversational context. In this paper, we propose an interpretable end-to-end model that combines information from both the utterance and the conversational context to detect sarcasm, and demonstrate its effectiveness through empirical evaluations. We also study the behavior of the proposed model to provide explanations for the model’s decisions. Importantly, our model is capable of determining the impact of utterance and conversational context on the model’s decisions. Finally, we provide an ablation study to illustrate the impact of different components of the proposed model.", "title": "" }, { "docid": "e34c102bf9c690e394ce7e373128be10", "text": "We learn a joint model of sentence extraction and compression for multi-document summarization. Our model scores candidate summaries according to a combined linear model whose features factor over (1) the n-gram types in the summary and (2) the compressions used. We train the model using a marginbased objective whose loss captures end summary quality. Because of the exponentially large set of candidate summaries, we use a cutting-plane algorithm to incrementally detect and add active constraints efficiently. Inference in our model can be cast as an ILP and thereby solved in reasonable time; we also present a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published ROUGE results to date on the TAC 2008 data set.", "title": "" }, { "docid": "a36fae7ccd3105b58a4977b5a2366ee8", "text": "As the number of big data management systems continues to grow, users increasingly seek to leverage multiple systems in the context of a single data analysis task. To efficiently support such hybrid analytics, we develop a tool called PipeGen for efficient data transfer between database management systems (DBMSs). PipeGen automatically generates data pipes between DBMSs by leveraging their functionality to transfer data via disk files using common data formats such as CSV. PipeGen creates data pipes by extending such functionality with efficient binary data transfer capabilities that avoid file system materialization, include multiple important format optimizations, and transfer data in parallel when possible. We evaluate our PipeGen prototype by generating 20 data pipes automatically between five different DBMSs. The results show that PipeGen speeds up data transfer by up to 3.8× as compared to transferring using disk files.", "title": "" }, { "docid": "c6576bb8585fff4a9ac112943b1e0785", "text": "Three-dimensional (3D) kinematic models are widely-used in videobased figure tracking. We show that these models can suffer from singularities when motion is directed along the viewing axis of a single camera. 
The single camera case is important because it arises in many interesting applications, such as motion capture from movie footage, video surveillance, and vision-based user-interfaces. We describe a novel two-dimensional scaled prismatic model (SPM) for figure registration. In contrast to 3D kinematic models, the SPM has fewer singularity problems and does not require detailed knowledge of the 3D kinematics. We fully characterize the singularities in the SPM and demonstrate tracking through singularities using synthetic and real examples. We demonstrate the application of our model to motion capture from movies. Fred Astaire is tracked in a clip from the film “Shall We Dance”. We also present the use of monocular hand tracking in a 3D user-interface. These results demonstrate the benefits of the SPM in tracking with a single source of video. KEY WORDS—AUTHOR: PLEASE PROVIDE", "title": "" }, { "docid": "7032e1ea76108b005d5303152c1eb365", "text": "We investigate the effect of social media content on customer engagement using a large-scale field study on Facebook. We content-code more than 100,000 unique messages across 800 companies engaging with users on Facebook using a combination of Amazon Mechanical Turk and state-of-the-art Natural Language Processing algorithms. We use this large-scale database of content attributes to test the effect of social media marketing content on subsequent user engagement − defined as Likes and comments − with the messages. We develop methods to account for potential selection biases that arise from Facebook’s filtering algorithm, EdgeRank, that assigns messages non-randomly to users. We find that inclusion of persuasive content − like emotional and philanthropic content − increases engagement with a message. We find that informative content − like mentions of prices, availability, and product features − reduce engagement when included in messages in isolation, but increase engagement when provided in combination with persuasive attributes. Persuasive content thus seems to be the key to effective engagement. Our results inform content design strategies in social media, and the methodology we develop to content-code large-scale textual data provides a framework for future studies on unstructured natural language data such as advertising content or product reviews.", "title": "" }, { "docid": "f14eeb6dff3f865bc65427210dd49aae", "text": "Although the most intensively studied mammalian olfactory system is that of the mouse, in which olfactory chemical cues of one kind or another are detected in four different nasal areas [the main olfactory epithelium (MOE), the septal organ (SO), Grüneberg's ganglion, and the sensory epithelium of the vomeronasal organ (VNO)], the extraordinarily sensitive olfactory system of the dog is also an important model that is increasingly used, for example in genomic studies of species evolution. Here we describe the topography and extent of the main olfactory and vomeronasal sensory epithelia of the dog, and we report finding no structures equivalent to the Grüneberg ganglion and SO of the mouse. Since we examined adults, newborns, and fetuses we conclude that these latter structures are absent in dogs, possibly as the result of regression or involution. 
The absence of a vomeronasal component based on VR2 receptors suggests that the VNO may be undergoing a similar involutionary process.", "title": "" }, { "docid": "670e3f4fdb4a66de74ae740ae19aa260", "text": "The adsorption and desorption of D2O on hydrophobic activated carbon fiber (ACF) occurs at a smaller pressure than the adsorption and desorption of H2O. The behavior of the critical desorption pressure difference between D2O and H2O in the pressure range of 1.25-1.80kPa is applied to separate low concentrated D2O from water using the hydrophobic ACF, because the desorption branches of D2O and H2O drop almost vertically. The deuterium concentration of all desorbed water in the above pressure range is lower than that of water without adsorption-treatment on ACF. The single adsorption-desorption procedure on ACF at 1.66kPa corresponding to the maximum difference of adsorption amount between D2O and H2O reduced the deuterium concentration of desorbed water to 130.6ppm from 143.0ppm. Thus, the adsorption-desorption procedure of water on ACF is a promising separation and concentration method of low concentrated D2O from water.", "title": "" }, { "docid": "6ce2991a68c7d4d6467ff2007badbaf0", "text": "This paper investigates acoustic models for automatic speech recognition (ASR) using deep neural networks (DNNs) whose input is taken directly from windowed speech waveforms (WSW). After demonstrating the ability of these networks to automatically acquire internal representations that are similar to mel-scale filter-banks, an investigation into efficient DNN architectures for exploiting WSW features is performed. First, a modified bottleneck DNN architecture is investigated to capture dynamic spectrum information that is not well represented in the time domain signal. Second,the redundancies inherent in WSW based DNNs are considered. The performance of acoustic models defined over WSW features is compared to that obtained from acoustic models defined over mel frequency spectrum coefficient (MFSC) features on the Wall Street Journal (WSJ) speech corpus. It is shown that using WSW features results in a 3.0 percent increase in WER relative to that resulting from MFSC features on the WSJ corpus. However, when combined with MFSC features, a reduction in WER of 4.1 percent is obtained with respect to the best evaluated MFSC based DNN acoustic model.", "title": "" }, { "docid": "09dc119ce0ca5765e8eea43cb3bf6b68", "text": "The traditional approach to assessing the face is to consider the face in thirds (upper, middle, and lower thirds). While useful, this approach limits conceptualization, as it is not based on the function of the face. From a functional perspec­ tive, the face has an anterior aspect and a lateral aspect. The anterior face is highly evolved beyond the basic survival needs, specifically, for communication and facial expression. In contrast, the lateral face predominantly covers the struc­ tures of mastication. A vertical line descending from the lateral orbital rim is the approximate division between the anterior and lateral zones of the face. Internally, a series of facial retaining ligaments are strategically located along this line to demarcate the anterior from the lateral face (Fig. 6.1). The mimetic muscles of the face are located in the superficial fascia of the anterior face, mostly around the eyes and the mouth. This highly mobile area of the face is designed to allow fine movement and is prone to develop laxity with aging. 
In contrast, the lateral face is relatively immobile as it overlies the structures to do with mastication, the temporalis, masseter, the parotid gland and its duct, all located deep to the deep fascia. The only superficial muscle in the lateral face is the platysma in the lower third, which extends to the level of the oral commissure. Importantly, the soft tissues of the anterior face are subdi­ vided into two parts; that which overlies the skeleton and the larger part that comprises the highly specialized sphincters overlying the bony cavities. Where the soft tissues overlie the orbital and oral cavities they are modified, as there is no deep S Y N O P S I S", "title": "" }, { "docid": "a3aa869de6c0e008e1d354197d0760cd", "text": "BACKGROUND\nWhile the cognitive theory of obsessive-compulsive disorder (OCD) is one of the most widely accepted accounts of the maintenance of the disorder in adults, no study to date has systematically evaluated the theory across children, adolescence and adults with OCD.\n\n\nMETHOD\nThis paper investigated developmental differences in the cognitive processing of threat in a sample of children, adolescents and adults with OCD. Using an idiographic assessment approach, as well as self-report questionnaires, this study evaluated cognitive appraisals of responsibility, probability, severity, thought-action fusion (TAF), thought-suppression, self-doubt and cognitive control. It was hypothesised that there would be age related differences in reported responsibility for harm, probability of harm, severity of harm, thought suppression, TAF, self-doubt and cognitive control.\n\n\nRESULTS\nResults of this study demonstrated that children with OCD reported experiencing fewer intrusive thoughts, which were less distressing and less uncontrollable than those experienced by adolescents and adults with OCD. Furthermore, responsibility attitudes, probability biases and thought suppression strategies were higher in adolescents and adults with OCD. Cognitive processes of TAF, perceived severity of harm, self-doubt and cognitive control were found to be comparable across age groups.\n\n\nCONCLUSIONS\nThese results suggest that the current cognitive theory of OCD needs to address developmental differences in the cognitive processing of threat. Furthermore, for a developmentally sensitive theory of OCD, further investigation is warranted into other possible age related maintenance factors. Implications of this investigation and directions for future research are discussed.", "title": "" }, { "docid": "e22495967c8ed452f552ab79fa7333be", "text": "Recently developed object detectors employ a convolutional neural network (CNN) by gradually increasing the number of feature layers with a pyramidal shape instead of using a featurized image pyramid. However, the different abstraction levels of CNN feature layers often limit the detection performance, especially on small objects. To overcome this limitation, we propose a CNN-based object detection architecture, referred to as a parallel feature pyramid (FP) network (PFPNet), where the FP is constructed by widening the network width instead of increasing the network depth. First, we adopt spatial pyramid pooling and some additional feature transformations to generate a pool of feature maps with different sizes. In PFPNet, the additional feature transformation is performed in parallel, which yields the feature maps with similar levels of semantic abstraction across the scales. 
We then resize the elements of the feature pool to a uniform size and aggregate their contextual information to generate each level of the final FP. The experimental results confirmed that PFPNet increases the performance of the latest version of the single-shot multi-box detector (SSD) by mAP of 6.4% AP and especially, 7.8% APsmall on the MS-COCO dataset.", "title": "" }, { "docid": "a6bf5e72fed34c7efe1cca6d66e09648", "text": "This paper describes algorithms to automatically derive 3D models of high visual quality from single facade images of arbitrary resolutions. We combine the procedural modeling pipeline of shape grammars with image analysis to derive a meaningful hierarchical facade subdivision. Our system gives rise to three exciting applications: urban reconstruction based on low resolution oblique aerial imagery, reconstruction of facades based on higher resolution ground-based imagery, and the automatic derivation of shape grammar rules from facade images to build a rule base for procedural modeling technology.", "title": "" }, { "docid": "b3db73c0398e6c0e6a90eac45bb5821f", "text": "The task of video grounding, which temporally localizes a natural language description in a video, plays an important role in understanding videos. Existing studies have adopted strategies of sliding window over the entire video or exhaustively ranking all possible clip-sentence pairs in a presegmented video, which inevitably suffer from exhaustively enumerated candidates. To alleviate this problem, we formulate this task as a problem of sequential decision making by learning an agent which regulates the temporal grounding boundaries progressively based on its policy. Specifically, we propose a reinforcement learning based framework improved by multi-task learning and it shows steady performance gains by considering additional supervised boundary information during training. Our proposed framework achieves state-ofthe-art performance on ActivityNet’18 DenseCaption dataset (Krishna et al. 2017) and Charades-STA dataset (Sigurdsson et al. 2016; Gao et al. 2017) while observing only 10 or less clips per video.", "title": "" }, { "docid": "3839daa795aaf81d202141fa3249e28a", "text": "The design and implementation of software for extracting information from GIS files to a format appropriate for use in a spatial modeling software environment is described. This has resulted in publicly available c/c++ language programs for extracting polygons as well as database information from ArcView shape files into the Matlab software environment. In addition, a set of publicly available mapping functions that employ a graphical user interface (GUI) within Matlab are described. Particular attention is given to the interplay between spatial econometric/statistical modeling and the use of GIS information as well as mapping functions. In a recent survey of the interplay between GIS and regional modeling, Goodchild and Haining (2003) indicate the need for a convergence of these two dimensions of spatial modeling in regional science. Many of the design considerations discussed here would also apply to implementing similar functionality in other software environments for spatial statistical modeling such as R/Splus or Gauss. Toolboxes are the name given by the MathWorks to related sets of Matlab functions aimed at solving a particular class of problems. 
Toolboxes of functions useful in signal processing, optimization, statistics, finance and a host of other areas are available from the MathWorks as add-ons to the standard Matlab software distribution. We label the set of functions described here for extracting GIS file information as well as the GUI mapping functions the Arc Mat Toolbox.", "title": "" }, { "docid": "019375c14bc0377acbf259ef423fa46f", "text": "Original approval signatures are on file with the University of Oregon Graduate School.", "title": "" }, { "docid": "90558e7b7d2a5fbc76fe3d2c824289b0", "text": "This paper deals with a 3 dB Ku-band coupler designed in substrate integrated waveguide (SIW) technology. A microstrip-SIW-transition is designed with a return loss (RL) greater than 20 dB. Rogers 4003 substrate is used for the SIW with a gold plated copper metallisation. The coupler achieves a relative bandwidth of 26.1% with an insertion loss (IL) lower than 2 dB, coupling balance smaller than 0.5 dB and RL and isolation greater than 15 dB.", "title": "" }, { "docid": "737f75e39cbf1b5226985e866a44c106", "text": "A security-enhanced agile software development process, SEAP, is introduced in the development of a mobile money transfer system at Ericsson Corp. A specific characteristic of SEAP is that it includes a security group consisting of four different competences, i.e., Security manager, security architect, security master and penetration tester. Another significant feature of SEAP is an integrated risk analysis process. In analyzing risks in the development of the mobile money transfer system, a general finding was that SEAP either solves risks that were previously postponed or solves a larger proportion of the risks in a timely manner. The previous software development process, i.e., The baseline process of the comparison outlined in this paper, required 2.7 employee hours spent for every risk identified in the analysis process compared to, on the average, 1.5 hours for the SEAP. The baseline development process left 50% of the risks unattended in the software version being developed, while SEAP reduced that figure to 22%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.1%, i.e., More than a five times increment. This is important, since an early correction may avoid severe attacks in the future. The security competence in SEAP accounts for 5% of the personnel cost in the mobile money transfer system project. As a comparison, the corresponding figure, i.e., For security, was 1% in the previous development process.", "title": "" }, { "docid": "22ecb164fb7a8bf4968dd7f5e018c736", "text": "Unsupervised learning techniques in computer vision of ten require learning latent representations, such as low-dimensional linear and non-linear subspaces. Noise and outliers in the data can frustrate these approaches by obscuring the latent spaces. Our main goal is deeper understanding and new development of robust approaches for representation learning. We provide a new interpretation for existing robust approaches and present two specific contributions: a new robust PCA approach, which can separate foreground features from dynamic background, and a novel robust spectral clustering method, that can cluster facial images with high accuracy. Both contributions show superior performance to standard methods on real-world test sets.", "title": "" } ]
scidocsrr
32980997ad6f37a110ae57463c388881
Quantitative Analysis of the Full Bitcoin Transaction Graph
[ { "docid": "cdefeefa1b94254083eba499f6f502fb", "text": "problems To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a \"problem\" is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = G, u, v, k is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder. Encodings If a computer program is to solve an abstract problem, problem instances must be represented in a way that the program understands. An encoding of a set S of abstract objects is a mapping e from S to the set of binary strings. For example, we are all familiar with encoding the natural numbers N = {0, 1, 2, 3, 4,...} as the strings {0, 1, 10, 11, 100,...}. Using this encoding, e(17) = 10001. Anyone who has looked at computer representations of keyboard characters is familiar with either the ASCII or EBCDIC codes. In the ASCII code, the encoding of A is 1000001. Even a compound object can be encoded as a binary string by combining the representations of its constituent parts. Polygons, graphs, functions, ordered pairs, programs-all can be encoded as binary strings. Thus, a computer algorithm that \"solves\" some abstract decision problem actually takes an encoding of a problem instance as input. We call a problem whose instance set is the set of binary strings a concrete problem. We say that an algorithm solves a concrete problem in time O(T (n)) if, when it is provided a problem instance i of length n = |i|, the algorithm can produce the solution in O(T (n)) time. A concrete problem is polynomial-time solvable, therefore, if there exists an algorithm to solve it in time O(n) for some constant k. We can now formally define the complexity class P as the set of concrete decision problems that are polynomial-time solvable. We can use encodings to map abstract problems to concrete problems. Given an abstract decision problem Q mapping an instance set I to {0, 1}, an encoding e : I → {0, 1}* can be used to induce a related concrete decision problem, which we denote by e(Q). If the solution to an abstract-problem instance i I is Q(i) {0, 1}, then the solution to the concreteproblem instance e(i) {0, 1}* is also Q(i). 
As a technicality, there may be some binary strings that represent no meaningful abstract-problem instance. For convenience, we shall assume that any such string is mapped arbitrarily to 0. Thus, the concrete problem produces the same solutions as the abstract problem on binary-string instances that represent the encodings of abstract-problem instances. We would like to extend the definition of polynomial-time solvability from concrete problems to abstract problems by using encodings as the bridge, but we would like the definition to be independent of any particular encoding. That is, the efficiency of solving a problem should not depend on how the problem is encoded. Unfortunately, it depends quite heavily on the encoding. For example, suppose that an integer k is to be provided as the sole input to an algorithm, and suppose that the running time of the algorithm is Θ(k). If the integer k is provided in unary-a string of k 1's-then the running time of the algorithm is O(n) on length-n inputs, which is polynomial time. If we use the more natural binary representation of the integer k, however, then the input length is n = ⌊lg k⌋ + 1. In this case, the running time of the algorithm is Θ (k) = Θ(2), which is exponential in the size of the input. Thus, depending on the encoding, the algorithm runs in either polynomial or superpolynomial time. The encoding of an abstract problem is therefore quite important to our under-standing of polynomial time. We cannot really talk about solving an abstract problem without first specifying an encoding. Nevertheless, in practice, if we rule out \"expensive\" encodings such as unary ones, the actual encoding of a problem makes little difference to whether the problem can be solved in polynomial time. For example, representing integers in base 3 instead of binary has no effect on whether a problem is solvable in polynomial time, since an integer represented in base 3 can be converted to an integer represented in base 2 in polynomial time. We say that a function f : {0, 1}* → {0,1}* is polynomial-time computable if there exists a polynomial-time algorithm A that, given any input x {0, 1}*, produces as output f (x). For some set I of problem instances, we say that two encodings e1 and e2 are polynomially related if there exist two polynomial-time computable functions f12 and f21 such that for any i I , we have f12(e1(i)) = e2(i) and f21(e2(i)) = e1(i). That is, the encoding e2(i) can be computed from the encoding e1(i) by a polynomial-time algorithm, and vice versa. If two encodings e1 and e2 of an abstract problem are polynomially related, whether the problem is polynomial-time solvable or not is independent of which encoding we use, as the following lemma shows. Lemma 34.1 Let Q be an abstract decision problem on an instance set I , and let e1 and e2 be polynomially related encodings on I . Then, e1(Q) P if and only if e2(Q) P. Proof We need only prove the forward direction, since the backward direction is symmetric. Suppose, therefore, that e1(Q) can be solved in time O(nk) for some constant k. Further, suppose that for any problem instance i, the encoding e1(i) can be computed from the encoding e2(i) in time O(n) for some constant c, where n = |e2(i)|. To solve problem e2(Q), on input e2(i), we first compute e1(i) and then run the algorithm for e1(Q) on e1(i). How long does this take? The conversion of encodings takes time O(n), and therefore |e1(i)| = O(n), since the output of a serial computer cannot be longer than its running time. 
Solving the problem on e1(i) takes time O(|e1(i)|) = O(n), which is polynomial since both c and k are constants. Thus, whether an abstract problem has its instances encoded in binary or base 3 does not affect its \"complexity,\" that is, whether it is polynomial-time solvable or not, but if instances are encoded in unary, its complexity may change. In order to be able to converse in an encoding-independent fashion, we shall generally assume that problem instances are encoded in any reasonable, concise fashion, unless we specifically say otherwise. To be precise, we shall assume that the encoding of an integer is polynomially related to its binary representation, and that the encoding of a finite set is polynomially related to its encoding as a list of its elements, enclosed in braces and separated by commas. (ASCII is one such encoding scheme.) With such a \"standard\" encoding in hand, we can derive reasonable encodings of other mathematical objects, such as tuples, graphs, and formulas. To denote the standard encoding of an object, we shall enclose the object in angle braces. Thus, G denotes the standard encoding of a graph G. As long as we implicitly use an encoding that is polynomially related to this standard encoding, we can talk directly about abstract problems without reference to any particular encoding, knowing that the choice of encoding has no effect on whether the abstract problem is polynomial-time solvable. Henceforth, we shall generally assume that all problem instances are binary strings encoded using the standard encoding, unless we explicitly specify the contrary. We shall also typically neglect the distinction between abstract and concrete problems. The reader should watch out for problems that arise in practice, however, in which a standard encoding is not obvious and the encoding does make a difference. A formal-language framework One of the convenient aspects of focusing on decision problems is that they make it easy to use the machinery of formal-language theory. It is worthwhile at this point to review some definitions from that theory. An alphabet Σ is a finite set of symbols. A language L over Σ is any set of strings made up of symbols from Σ. For example, if Σ = {0, 1}, the set L = {10, 11, 101, 111, 1011, 1101, 10001,...} is the language of binary representations of prime numbers. We denote the empty string by ε, and the empty language by Ø. The language of all strings over Σ is denoted Σ*. For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000,...} is the set of all binary strings. Every language L over Σ is a subset of Σ*. There are a variety of operations on languages. Set-theoretic operations, such as union and intersection, follow directly from the set-theoretic definitions. We define the complement of L by . The concatenation of two languages L1 and L2 is the language L = {x1x2 : x1 L1 and x2 L2}. The closure or Kleene star of a language L is the language L*= {ε} L L L ···, where Lk is the language obtained by", "title": "" } ]
[ { "docid": "13774d2655f2f0ac575e11991eae0972", "text": "This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate of the method by assuming a property based on the Kurdyka– Lojasiewicz inequality. The proposed algorithms are tested on nonnegative matrix and tensor factorization, as well as matrix and tensor recovery from incomplete observations. The tests include synthetic data and hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality. The MATLAB code of nonnegative matrix/tensor decomposition and completion, along with a few demos, are accessible from the authors’ homepages.", "title": "" }, { "docid": "64389907530dd26392e037f1ab2d1da5", "text": "Most current license plate (LP) detection and recognition approaches are evaluated on a small and usually unrepresentative dataset since there are no publicly available large diverse datasets. In this paper, we introduce CCPD, a large and comprehensive LP dataset. All images are taken manually by workers of a roadside parking management company and are annotated carefully. To our best knowledge, CCPD is the largest publicly available LP dataset to date with over 250k unique car images, and the only one provides vertices location annotations. With CCPD, we present a novel network model which can predict the bounding box and recognize the corresponding LP number simultaneously with high speed and accuracy. Through comparative experiments, we demonstrate our model outperforms current object detection and recognition approaches in both accuracy and speed. In real-world applications, our model recognizes LP numbers directly from relatively high-resolution images at over 61 fps and 98.5% accuracy.", "title": "" }, { "docid": "7645c6a0089ab537cb3f0f82743ce452", "text": "Behavioral studies of facial emotion recognition (FER) in autism spectrum disorders (ASD) have yielded mixed results. Here we address demographic and experiment-related factors that may account for these inconsistent findings. We also discuss the possibility that compensatory mechanisms might enable some individuals with ASD to perform well on certain types of FER tasks in spite of atypical processing of the stimuli, and difficulties with real-life emotion recognition. Evidence for such mechanisms comes in part from eye-tracking, electrophysiological, and brain imaging studies, which often show abnormal eye gaze patterns, delayed event-related-potential components in response to face stimuli, and anomalous activity in emotion-processing circuitry in ASD, in spite of intact behavioral performance during FER tasks. 
We suggest that future studies of FER in ASD: 1) incorporate longitudinal (or cross-sectional) designs to examine the developmental trajectory of (or age-related changes in) FER in ASD and 2) employ behavioral and brain imaging paradigms that can identify and characterize compensatory mechanisms or atypical processing styles in these individuals.", "title": "" }, { "docid": "9175794d83b5f110fb9f08dc25a264b8", "text": "We describe an investigation into e-mail content mining for author identification, or authorship attribution, for the purpose of forensic investigation. We focus our discussion on the ability to discriminate between authors for the case of both aggregated e-mail topics as well as across different e-mail topics. An extended set of e-mail document features including structural characteristics and linguistic patterns were derived and, together with a Support Vector Machine learning algorithm, were used for mining the e-mail content. Experiments using a number of e-mail documents generated by different authors on a set of topics gave promising results for both aggregated and multi-topic author categorisation.", "title": "" }, { "docid": "0c24b767705b3a88acf9fe128c0e3477", "text": "The studied camera is basically just a line of pixel sensors, which can be rotated on a full circle, describing a cylindrical surface this way. During a rotation we take individual shots, line by line. All these line images define a panoramic image on a cylindrical surface. This camera architecture (in contrast to the plane segment of the pinhole camera) comes with new challenges, and this report is about a classification of different models of such cameras and their calibration. Acknowledgment. The authors acknowledge comments, collaboration or support by various students and colleagues at CITR Auckland and DLR Berlin-Adlershof. report1_HWK.tex; 22/03/2006; 9:47; p.1", "title": "" }, { "docid": "c42aaf64a6da2792575793a034820dcb", "text": "Psychologists and psychiatrists commonly rely on self-reports or interviews to diagnose or treat behavioral addictions. The present study introduces a novel source of data: recordings of the actual problem behavior under investigation. A total of N = 58 participants were asked to fill in a questionnaire measuring problematic mobile phone behavior featuring several questions on weekly phone usage. After filling in the questionnaire, all participants received an application to be installed on their smartphones, which recorded their phone usage for five weeks. The analyses revealed that weekly phone usage in hours was overestimated; in contrast, numbers of call and text message related variables were underestimated. Importantly, several associations between actual usage and being addicted to mobile phones could be derived exclusively from the recorded behavior, but not from self-report variables. The study demonstrates the potential benefit to include methods of psychoinformatics in the diagnosis and treatment of problematic mobile phone use.", "title": "" }, { "docid": "f6383e814999744b24e6a1ce6507e47b", "text": "We propose a new approach, CCRBoost, to identify the hierarchical structure of spatio-temporal patterns at different resolution levels and subsequently construct a predictive model based on the identified structure. To accomplish this, we first obtain indicators within different spatio-temporal spaces from the raw data. 
A distributed spatio-temporal pattern (DSTP) is extracted from a distribution, which consists of the locations with similar indicators from the same time period, generated by multi-clustering. Next, we use a greedy searching and pruning algorithm to combine the DSTPs in order to form an ensemble spatio-temporal pattern (ESTP). An ESTP can represent the spatio-temporal pattern of various regularities or a non-stationary pattern. To consider all the possible scenarios of a real-world ST pattern, we then build a model with layers of weighted ESTPs. By evaluating all the indicators of one location, this model can predict whether a target event will occur at this location. In the case study of predicting crime events, our results indicate that the predictive model can achieve 80 percent accuracy in predicting residential burglary, which is better than other methods.", "title": "" }, { "docid": "cc6cf6557a8be12d8d3a4550163ac0a9", "text": "In this study, different S/D contacting options for lateral NWFET devices are benchmarked at 7nm node dimensions and beyond. Comparison is done at both DC and ring oscillator levels. It is demonstrated that implementing a direct contact to a fin made of Si/SiGe super-lattice results in 13% performance improvement. Also, we conclude that the integration of internal spacers between the NWs is a must for lateral NWFETs in order to reduce device parasitic capacitance.", "title": "" }, { "docid": "8bbf5cc2424e0365d6968c4c465fe5f7", "text": "We describe a method for assigning English tense and aspect in a system that realizes surface text for symbolically encoded narratives. Our testbed is an encoding interface in which propositions that are attached to a timeline must be realized from several temporal viewpoints. This involves a mapping from a semantic encoding of time to a set of tense/aspect permutations. The encoding tool realizes each permutation to give a readable, precise description of the narrative so that users can check whether they have correctly encoded actions and statives in the formal representation. Our method selects tenses and aspects for individual event intervals as well as subintervals (with multiple reference points), quoted and unquoted speech (which reassign the temporal focus), and modal events such as conditionals.", "title": "" }, { "docid": "e0cf83bcc9830f2a94af4822576e4167", "text": "Multiple kernel learning (MKL) optimally combines the multiple channels of each sample to improve classification performance. However, existing MKL algorithms cannot effectively handle the situation where some channels are missing, which is common in practical applications. This paper proposes an absent MKL (AMKL) algorithm to address this issue. Different from existing approaches where missing channels are firstly imputed and then a standard MKL algorithm is deployed on the imputed data, our algorithm directly classifies each sample with its observed channels. In specific, we define a margin for each sample in its own relevant space, which corresponds to the observed channels of that sample. The proposed AMKL algorithm then maximizes the minimum of all sample-based margins, and this leads to a difficult optimization problem. We show that this problem can be reformulated as a convex one by applying the representer theorem. This makes it readily be solved via existing convex optimization packages. Extensive experiments are conducted on five MKL benchmark data sets to compare the proposed algorithm with existing imputation-based methods. 
As observed, our algorithm achieves superior performance and the improvement is more significant with the increasing missing ratio. Disciplines Engineering | Science and Technology Studies Publication Details Liu, X., Wang, L., Yin, J., Dou, Y. & Zhang, J. (2015). Absent multiple kernel learning. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (pp. 2807-2813). United States: IEEE. This conference paper is available at Research Online: http://ro.uow.edu.au/eispapers/5373 Absent Multiple Kernel Learning Xinwang Liu School of Computer National University of Defense Technology Changsha, China, 410073 Lei Wang School of Computer Science and Software Engineering University of Wollongong NSW, Australia, 2522 Jianping Yin, Yong Dou School of Computer National University of Defense Technology Changsha, China, 410073 Jian Zhang Faculty of Engineering and Information Technology University of Technology Sydney NSW, Australia, 2007", "title": "" }, { "docid": "4b049e3fee1adfba2956cb9111a38bd2", "text": "This paper presents an optimization based algorithm for underwater image de-hazing problem. Underwater image de-hazing is the most prominent area in research. Underwater images are corrupted due to absorption and scattering. With the effect of that, underwater images have the limitation of low visibility, low color and poor natural appearance. To avoid the mentioned problems, Enhanced fuzzy intensification method is proposed. For each color channel, enhanced fuzzy membership function is derived. Second, the correction of fuzzy based pixel intensification is carried out for each channel to remove haze and to enhance visibility and color. The post processing of fuzzy histogram equalization is implemented for red channel alone when the captured image is having highest value of red channel pixel values. The proposed method provides better results in terms maximum entropy and PSNR with minimum MSE with very minimum computational time compared to existing methodologies.", "title": "" }, { "docid": "616354e134820867698abd3257606e62", "text": "Supplementary to the description of diseases at symptom level, the International Classification of Functioning, Disability and Health (ICF), edited by the WHO, for the first time enables a systematic description also at the level of disabilities and impairments. The Mini-ICF-Rating for Mental Disorders (Mini-ICF-P) is a short observer rating instrument for the assessment of disabilities, especially with regard to occupational functioning. The Mini-ICF-P was first evaluated empirically in 125 patients of a Department of Behavioural Medicine and Psychosomatics. Parallel-test reliability was r = 0.59. Correlates were found with cognitive and motivational variables and duration of sick leave from work. In summary, the Mini-ICF-P is a quick and practicable instrument.", "title": "" }, { "docid": "03e1ede18dcc78409337faf265940a4d", "text": "Epidermal thickness and its relationship to age, gender, skin type, pigmentation, blood content, smoking habits and body site is important in dermatologic research and was investigated in this study. Biopsies from three different body sites of 71 human volunteers were obtained, and thickness of the stratum corneum and cellular epidermis was measured microscopically using a preparation technique preventing tissue damage. Multiple regressions analysis was used to evaluate the effect of the various factors independently of each other. 
Mean (SD) thickness of the stratum corneum was 18.3 (4.9) microm at the dorsal aspect of the forearm, 11.0 (2.2) microm at the shoulder and 14.9 (3.4) microm at the buttock. Corresponding values for the cellular epidermis were 56.6 (11.5) microm, 70.3 (13.6) microm and 81.5 (15.7) microm, respectively. Body site largely explains the variation in epidermal thickness, but also a significant individual variation was observed. Thickness of the stratum corneum correlated positively to pigmentation (p = 0.0008) and negatively to the number of years of smoking (p < 0.0001). Thickness of the cellular epidermis correlated positively to blood content (P = 0.028) and was greater in males than in females (P < 0.0001). Epidermal thickness was not correlated to age or skin type.", "title": "" }, { "docid": "c399b42e2c7307a5b3c081e34535033d", "text": "The Internet of Things (IoT) plays an ever-increasing role in enabling smart city applications. An ontology-based semantic approach can help improve interoperability between a variety of IoT-generated as well as complementary data needed to drive these applications. While multiple ontology catalogs exist, using them for IoT and smart city applications require significant amount of work. In this paper, we demonstrate how can ontology catalogs be more effectively used to design and develop smart city applications? We consider four ontology catalogs that are relevant for IoT and smart cities: 1) READY4SmartCities; 2) linked open vocabulary (LOV); 3) OpenSensingCity (OSC); and 4) LOVs for IoT (LOV4IoT). To support semantic interoperability with the reuse of ontology-based smart city applications, we present a methodology to enrich ontology catalogs with those ontologies. Our methodology is generic enough to be applied to any other domains as is demonstrated by its adoption by OSC and LOV4IoT ontology catalogs. Researchers and developers have completed a survey-based evaluation of the LOV4IoT catalog. The usefulness of ontology catalogs ascertained through this evaluation has encouraged their ongoing growth and maintenance. The quality of IoT and smart city ontologies have been evaluated to improve the ontology catalog quality. We also share the lessons learned regarding ontology best practices and provide suggestions for ontology improvements with a set of software tools.", "title": "" }, { "docid": "19e2eaf78ec2723289e162503453b368", "text": "Printing sensors and electronics over flexible substrates are an area of significant interest due to low-cost fabrication and possibility of obtaining multifunctional electronics over large areas. Over the years, a number of printing technologies have been developed to pattern a wide range of electronic materials on diverse substrates. As further expansion of printed technologies is expected in future for sensors and electronics, it is opportune to review the common features, the complementarities, and the challenges associated with various printing technologies. This paper presents a comprehensive review of various printing technologies, commonly used substrates and electronic materials. Various solution/dry printing and contact/noncontact printing technologies have been assessed on the basis of technological, materials, and process-related developments in the field. Critical challenges in various printing techniques and potential research directions have been highlighted. 
Possibilities of merging various printing methodologies have been explored to extend the lab developed standalone systems to high-speed roll-to-roll production lines for system level integration.", "title": "" }, { "docid": "9a9d4d1d482333734d9b0efe87d1e53e", "text": "Following acute therapeutic interventions, the majority of stroke survivors are left with a poorly functioning hemiparetic hand. Rehabilitation robotics has shown promise in providing patients with intensive therapy leading to functional gains. Because of the hand's crucial role in performing activities of daily living, attention to hand therapy has recently increased. This paper introduces a newly developed Hand Exoskeleton Rehabilitation Robot (HEXORR). This device has been designed to provide full range of motion (ROM) for all of the hand's digits. The thumb actuator allows for variable thumb plane of motion to incorporate different degrees of extension/flexion and abduction/adduction. Compensation algorithms have been developed to improve the exoskeleton's backdrivability by counteracting gravity, stiction and kinetic friction. We have also designed a force assistance mode that provides extension assistance based on each individual's needs. A pilot study was conducted on 9 unimpaired and 5 chronic stroke subjects to investigate the device's ability to allow physiologically accurate hand movements throughout the full ROM. The study also tested the efficacy of the force assistance mode with the goal of increasing stroke subjects' active ROM while still requiring active extension torque on the part of the subject. For 12 of the hand digits'15 joints in neurologically normal subjects, there were no significant ROM differences (P > 0.05) between active movements performed inside and outside of HEXORR. Interjoint coordination was examined in the 1st and 3rd digits, and no differences were found between inside and outside of the device (P > 0.05). Stroke subjects were capable of performing free hand movements inside of the exoskeleton and the force assistance mode was successful in increasing active ROM by 43 ± 5% (P < 0.001) and 24 ± 6% (P = 0.041) for the fingers and thumb, respectively. Our pilot study shows that this device is capable of moving the hand's digits through nearly the entire ROM with physiologically accurate trajectories. Stroke subjects received the device intervention well and device impedance was minimized so that subjects could freely extend and flex their digits inside of HEXORR. Our active force-assisted condition was successful in increasing the subjects' ROM while promoting active participation.", "title": "" }, { "docid": "64d9f6973697749b6e2fa330101cbc77", "text": "Evidence is presented that recognition judgments are based on an assessment of familiarity, as is described by signal detection theory, but that a separate recollection process also contributes to performance. In 3 receiver-operating characteristics (ROC) experiments, the process dissociation procedure was used to examine the contribution of these processes to recognition memory. In Experiments 1 and 2, reducing the length of the study list increased the intercept (d') but decreased the slope of the ROC and increased the probability of recollection but left familiarity relatively unaffected. In Experiment 3, increasing study time increased the intercept but left the slope of the ROC unaffected and increased both recollection and familiarity. 
In all 3 experiments, judgments based on familiarity produced a symmetrical ROC (slope = 1), but recollection introduced a skew such that the slope of the ROC decreased.", "title": "" }, { "docid": "2950e3c1347c4adeeb2582046cbea4b8", "text": "We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, and it thus enables motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction.\n Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.", "title": "" }, { "docid": "3566e18518d80b2431c4fba34f790a82", "text": "The aim of this paper is to present a nonlinear dynamic model for Voltage Source Converter-based HVDC (VSC-HVDC) links that can be used for dynamic studies. It includes the main physical elements and is controlled by PI controllers with antiwindup. A linear control model is derived for efficient tuning of the controllers of the nonlinear dynamic model. The nonlinear dynamic model is then tuned according to the performance of an ABB HVDC Light model.", "title": "" }, { "docid": "f6227013273d148321cab1eef83c40e5", "text": "The advanced features of 5G mobile wireless network systems yield new security requirements and challenges. This paper presents a comprehensive study on the security of 5G wireless network systems compared with the traditional cellular networks. The paper starts with a review on 5G wireless networks particularities as well as on the new requirements and motivations of 5G wireless security. The potential attacks and security services are summarized with the consideration of new service requirements and new use cases in 5G wireless networks. The recent development and the existing schemes for the 5G wireless security are presented based on the corresponding security services, including authentication, availability, data confidentiality, key management, and privacy. This paper further discusses the new security features involving different technologies applied to 5G, such as heterogeneous networks, device-to-device communications, massive multiple-input multiple-output, software-defined networks, and Internet of Things. Motivated by these security research and development activities, we propose a new 5G wireless security architecture, based on which the analysis of identity management and flexible authentication is provided. As a case study, we explore a handover procedure as well as a signaling load scheme to show the advantages of the proposed security architecture. 
The challenges and future directions of 5G wireless security are finally summarized.", "title": "" } ]
scidocsrr
6d898e36a5a1249ef0cf935ce1d93a00
Identifying Well-formed Natural Language Questions
[ { "docid": "eede682da157ac788a300e9c3080c460", "text": "We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions. Given a large, community-authored, question-paraphrase corpus, we demonstrate that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions. Our approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme. Experiments show that our approach more than quadruples the recall of the seed lexicon, with only an 8% loss in precision.", "title": "" }, { "docid": "b83bfb8384d227d67835042d9bdebf82", "text": "We extend and improve upon recent work in structured training for neural network transition-based dependency parsing. We do this by experimenting with novel features, additional transition systems and by testing on a wider array of languages. In particular, we introduce set-valued features to encode the predicted morphological properties and part-ofspeech confusion sets of the words being parsed. We also investigate the use of joint parsing and partof-speech tagging in the neural paradigm. Finally, we conduct a multi-lingual evaluation that demonstrates the robustness of the overall structured neural approach, as well as the benefits of the extensions proposed in this work. Our research further demonstrates the breadth of the applicability of neural network methods to dependency parsing, as well as the ease with which new features can be added to neural parsing models.", "title": "" }, { "docid": "06c0ee8d139afd11aab1cc0883a57a68", "text": "In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.", "title": "" } ]
[ { "docid": "045a3b0148f8e3ec40d47fcf43ba4823", "text": "We present a compact SHA-256 hardware architecture suitable for the Trusted Mobile Platform (TMP), which requires low-area and low-power characteristics. The built-in hardware engine to compute a hash algorithm in TMP is one of the most important circuit blocks and contributes the performance of the whole platform because it is used as key primitives supporting platform integrity and command authentication. Unlike personal computers, mobile platform have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architecture and design methods for a compact hash hardware module are required. Our SHA-256 hardware can compute 512-bit data block using 8,588 gates on a 0.25μm CMOS process. The highest operation frequency and throughput of the proposed architecture are 136MHz and 142Mbps, which satisfies processing requirement for the mobile application. 1 Background and Motivation The Trusted Mobile Platform(TMP) [1] guarantees the integrity of the mobile platform and is a preferred requirement in the evolutionary process where the mobile device changes into the open platform and value-based application technology. TMP improves the reliability and security of a device using Trusted Platform Module (TPM), which ensures that the device is running on the authorized software and hardware environment. The TPM is a microcontrollerbased security-engine based on an industry standard specification issued by the Trusted Computing Group (TCG). It protects encryption keys and digital signature keys to maintain data confidentiality and integrity. Especially important, TPM chip is specifically designed to protect a platform and user authentication information from software-based attacks. The built-in hardware engine for hash algorithm in TMP is one of the most important circuit blocks because it is used as a key primitive supporting integrity verification and used in the most of TPM-commands for authentication of the M. Yung, P. Liu, and D. Lin (Eds.): Inscrypt 2008, LNCS 5487, pp. 240–252, 2009. c © Springer-Verlag Berlin Heidelberg 2009 Efficient Hardware Architecture of SHA-256 Algorithm for TMP 241 platform. Current version of TMP [1] specification recommends to use the SHA-1 algorithm. However, recently, National Security Agency (NSA) announced Suite B Cryptography that specify encryption, signature and hash algorithms at the 2005 RSA Conference. According to the Suite B Cryptography, SHA-256 is a common mode for widespread interpretability and appropriate for protecting classified information up to the SECRET level. This cryptographic trend to move over Suite B is realized in the trusted mobile computing. Furthermore, TCG announced that TPM must support SHA-256 algorithm in the revised specification, TPM NEXT [2]. Integrating TCG’s security features into a mobile phone could be a challenge work. In reality, most mobile devices do not require a very high data processing speed. For example, when cellular wireless network technology migrates from 2.5G to 3G, only the data rate is increased from 144kbps to 2Mbps. Also, data rate of Bluetooth, which is one of the wireless Personal Area Network(PAN), is only 10Mbps maximum. However, when security function is added, considerable computing power is demanded to the microprocessor of a handheld device. For example, the processing requirements for AES and SHA at 10Mbps are 206.3 and 115.4 MIPS respectively [3]. 
In comparison, a state-of-the art handset processors, such as MPC860 and ARM7 are capable of delivering up to 106MIPS and 130MIPS, respectively [4, 5]. The above data indicates a clear disparity between security processing requirements and available processor capabilities, even when assuming that the handset processor is fully dedicated to security processing. In reality, the handset processor also needs to execute the operating system, and application software, which by themselves generate a significant processing workload. Another critical problem of security processing on a mobile platform is power consumption and battery capacity. Unlike personal computers, mobile devices have strict environment in power consumption, in battery life and in available circuit area. Among these limitations, the power consumption is the critical metric to be minimized in the design of cryptographic circuits for mobile platforms. For battery-powered systems, the energy drawn from the battery directly influences the systems battery life, and, consequently, the duration and extent of its mobility, and its overall utility. In general, battery-driven systems operate under stringent constraint especially in limited power. The power limitation gets more serious when the mobile device is subject to the demand of security operations. According to the estimation of [6], the execution of security applications on a battery-powered device can decrease battery life at least by half. Therefore, design methodologies at different abstraction levels, such as systems, architectures, logic design, basic cells, as well as layout, must take into account to design of a low-cost SHA-256 module for trusted mobile platform. In this paper, we introduce an efficient hardware architecture of low-cost SHA-256 algorithm for trusted mobile platforms. As a result, a compact SHA-256 hardware implementation capable of supporting the integrity check and command authentication of trusted mobile platforms could be developed and evaluated so far. The rest of this paper is constructed as follows. Section 2 describes a brief review of SHA-256 algorithm and a short summary of some previous works", "title": "" }, { "docid": "12014f235a197a4fa94e217c50e3433d", "text": "a r t i c l e i n f o Since the early 1990s, South Korea has been expanding its expressways. As of July 2013, a total of 173 expressway service areas (ESAs) have been established. Among these, 31 ESAs were closed due to financial deficits. To address this challenge, this study aimed to develop a decision support system for determining the optimal size of a new ESA, focusing on the profitability of the ESA. This study adopted a case-based reasoning approach as the main research method because it is necessary to provide the historical data as a reference in determining the optimal size of a new ESA, which is more suitable for the decision-making process from the practical perspective. This study used a total of 106 general ESAs to develop the proposed system. Compared to the conventional process (i.e., direction estimation), the prediction accuracy of the improved process (i.e., three-phase estimation process) was improved by 9.84%. The computational time required for the optimization of the proposed system was determined to be less than 10 min (from 1.75 min to 9.93 min). 
The proposed system could be useful for the final decision-maker as the following purposes: (i) the probability estimation model for determining the optimal size of a new ESA during the planning stage; (ii) the approximate initial construction cost estimation model for a new ESA by using the estimated sales in the ESA; and (iii) the comparative assessment model for evaluating the sales per the building area of the existing ESA.", "title": "" }, { "docid": "37e7ee6d3cc3a999ba7f4bd6dbaa27e7", "text": "Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multi-modal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. This review article provides a factual listing of methods and summarizes the broad scientific challenges faced in the field of medical image fusion. We characterize the medical image fusion research based on (1) the widely used image fusion methods, (2) imaging modalities, and (3) imaging of organs that are under study. This review concludes that even though there exists several open ended technological and scientific challenges, the fusion of medical images has proved to be useful for advancing the clinical reliability of using medical imaging for medical diagnostics and analysis, and is a scientific discipline that has the potential to significantly grow in the coming years.", "title": "" }, { "docid": "124f40ccd178e6284cc66b88da98709d", "text": "The tripeptide glutathione is the thiol compound present in the highest concentration in cells of all organs. Glutathione has many physiological functions including its involvement in the defense against reactive oxygen species. The cells of the human brain consume about 20% of the oxygen utilized by the body but constitute only 2% of the body weight. Consequently, reactive oxygen species which are continuously generated during oxidative metabolism will be generated in high rates within the brain. Therefore, the detoxification of reactive oxygen species is an essential task within the brain and the involvement of the antioxidant glutathione in such processes is very important. The main focus of this review article will be recent results on glutathione metabolism of different brain cell types in culture. The glutathione content of brain cells depends strongly on the availability of precursors for glutathione. Different types of brain cells prefer different extracellular glutathione precursors. Glutathione is involved in the disposal of peroxides by brain cells and in the protection against reactive oxygen species. In coculture astroglial cells protect other neural cell types against the toxicity of various compounds. One mechanism for this interaction is the supply by astroglial cells of glutathione precursors to neighboring cells. Recent results confirm the prominent role of astrocytes in glutathione metabolism and the defense against reactive oxygen species in brain. 
These results also suggest an involvement of a compromised astroglial glutathione system in the oxidative stress reported for neurological disorders.", "title": "" }, { "docid": "ee947daebb5e560570edb1f3ad553b6e", "text": "We consider the problem of embedding entities and relations of knowledge bases into low-dimensional continuous vector spaces (distributed representations). Unlike most existing approaches, which are primarily efficient for modelling pairwise relations between entities, we attempt to explicitly model both pairwise relations and long-range interactions between entities, by interpreting them as linear operators on the low-dimensional embeddings of the entities. Therefore, in this paper we introduces path ranking to capture the long-range interactions of knowledge graph and at the same time preserve the pairwise relations of knowledge graph; we call it structured embedding via pairwise relation and longrange interactions (referred to as SePLi). Comparing with the-state-of-the-art models, SePLi achieves better performances of embeddings.", "title": "" }, { "docid": "4c1798f0fd65b8d7e60a04a9a3df5201", "text": "This study examined linkages between divorce, depressive/withdrawn parenting, and child adjustment problems at home and school. Middle class divorced single mother families (n = 35) and 2-parent families (n = 174) with a child in the fourth grade participated. Mothers and teachers completed yearly questionnaires and children were interviewed when they were in the fourth, fifth, and sixth grades. Structural equation modeling suggested that the association between divorce and child externalizing and internalizing behavior was partially mediated by depressive/withdrawn parenting when the children were in the fourth and fifth grades.", "title": "" }, { "docid": "99100c269525cea2e4c2d29f12afc5e9", "text": "We do things in the world by exploiting our knowledge of what causes what. But in trying to reason formally about causality, there is a difficulty: to reason with certainty we need complete knowledge of all the relevant events and circumstances, whereas in everyday reasoning tasks we need a more serviceable but looser notion that does not make such demands on our knowledge. In this work the notion of “causal complex” is introduced for a complete set of events and conditions necessary for the causal consequent to occur, and the term “cause” is used for the makeshift, nonmonotonic notion we require for everyday tasks such as planning and language understanding. Like all interesting concepts, neither of these can be defined with necessary and sufficient conditions, but they can be more or less tightly constrained by necessary conditions or sufficient conditions. The issue of how to distinguish between what is in a causal complex from what is outside it is discussed, and within a causal complex, how to distinugish the eventualities that deserve to be called “causes” from those that do not, in particular circumstances. One particular modal, the word “would”, is examined from the standpoint of its underlying causal content, as a linguistic motivation for this enterprise.", "title": "" }, { "docid": "97f748ee5667ee8c2230e07881574c22", "text": "The most widely used signal in clinical practice is the ECG. ECG conveys information regarding the electrical function of the heart, by altering the shape of its constituent waves, namely the P, QRS, and T waves. 
Thus, the required tasks of ECG processing are the reliable recognition of these waves, and the accurate measurement of clinically important parameters measured from the temporal distribution of the ECG constituent waves. In this paper, we shall review some current trends on ECG pattern recognition. In particular, we shall review non-linear transformations of the ECG, the use of principal component analysis (linear and non-linear), ways to map the transformed data into n-dimensional spaces, and the use of neural networks (NN) based techniques for ECG pattern recognition and classification. The problems we shall deal with are the QRS/PVC recognition and classification, the recognition of ischemic beats and episodes, and the detection of atrial fibrillation. Finally, a generalised approach to the classification problems in n-dimensional spaces will be presented using among others NN, radial basis function networks (RBFN) and non-linear principal component analysis (NLPCA) techniques. The performance measures of the sensitivity and specificity of these algorithms will also be presented using as training and testing data sets from the MIT-BIH and the European ST-T databases.", "title": "" }, { "docid": "accb879062cf9c2e6fa3fb636f33b333", "text": "The CLEF eRisk 2018 challenge focuses on early detection of signs of depression or anorexia using posts or comments over social media. The eRisk lab has organized two tasks this year and released two different corpora for the individual tasks. The corpora are developed using the posts and comments over Reddit, a popular social media. The machine learning group at Ramakrishna Mission Vivekananda Educational and Research Institute (RKMVERI), India has participated in this challenge and individually submitted five results to accomplish the objectives of these two tasks. The paper presents different machine learning techniques and analyze their performance for early risk prediction of anorexia or depression. The techniques involve various classifiers and feature engineering schemes. The simple bag of words model has been used to perform ada boost, random forest, logistic regression and support vector machine classifiers to identify documents related to anorexia or depression in the individual corpora. We have also extracted the terms related to anorexia or depression using metamap, a tool to extract biomedical concepts. Theerefore, the classifiers have been implemented using bag of words features and metamap features individually and subsequently combining these features. The performance of the recurrent neural network is also reported using GloVe and Fasttext word embeddings. Glove and Fasttext are pre-trained word vectors developed using specific corpora e.g., Wikipedia. The experimental analysis on the training set shows that the ada boost classifier using bag of words model outperforms the other methods for task1 and it achieves best score on the test set in terms of precision over all the runs in the challenge. Support vector machine classifier using bag of words model outperforms the other methods in terms of fmeasure for task2. The results on the test set submitted to the challenge suggest that these framework achieve reasonably good performance.", "title": "" }, { "docid": "3df69e5ce63d3a3b51ad6f2b254e12b6", "text": "This paper presents three approaches to creating corpora that we are working on for speech-to-speech translation in the travel conversation task. 
The first approach is to collect sentences that bilingual travel experts consider useful for people going-to/coming-from another country. The resulting English-Japanese aligned corpora are collectively called the basic travel expression corpus (BTEC), which is now being translated into several other languages. The second approach tries to expand this corpus by generating many \"synonymous\" expressions for each sentence. Although we can create large corpora by the above two approaches relatively cheaply, they may be different from utterances in actual conversation. Thus, as the third approach, we are collecting dialogue corpora by letting two people talk, each in his/her native language, through a speech-to-speech translation system. To concentrate on translation modules, we have replaced speech recognition modules with human typists. We will report some of the characteristics of these corpora as well.", "title": "" }, { "docid": "ab3eee7ef150ea7eb1928dda43447150", "text": "A new type of doubly salient machine is presented in which the field excitation is provided by a nonrotating permanent magnet (PM). This doubly salient PM (DSPM) motor is shown to be kindred to square waveform PM brushless DC motors. Linear and nonlinear analysis has been performed to investigate the characteristics of this new type of PM motors. A prototype DSPM motor has been designed, and a comparison has been made between this new type of motor and the induction motor. It is shown that, by fully exploiting modern high-energy PM material and the doubly salient structure, the DSPM motor can offer superior performance over existing motors in terms of efficiency, torque density, torque-to-current ratio, and torque-to-inertia ratio, while retaining a simple structure amenable to automatic manufacture.<<ETX>>", "title": "" }, { "docid": "5273e9fea51c85651255de7c253066a0", "text": "This paper presents SimpleDS, a simple and publicly available dialogue system trained with deep reinforcement learning. In contrast to previous reinforcement learning dialogue systems, this system avoids manual feature engineering by performing action selection directly from raw text of the last system and (noisy) user responses. Our initial results, in the restaurant domain, report that it is indeed possible to induce reasonable behaviours with such an approach that aims for higher levels of automation in dialogue control for intelligent interactive agents.", "title": "" }, { "docid": "1bc1965682f757dcfa86936911855add", "text": "Software-Defined Networking (SDN) introduces a new communication network management paradigm and has gained much attention recently. In SDN, a network controller overlooks and manages the entire network by configuring routing mechanisms for underlying switches. The switches report their status to the controller periodically, such as port statistics and flow statistics, according to their communication protocol. However, switches may contain vulnerabilities that can be exploited by attackers. A compromised switch may not only lose its normal functionality, but it may also maliciously paralyze the network by creating network congestions or packet loss. Therefore, it is important for the system to be able to detect and isolate malicious switches. In this work, we investigate a methodology for an SDN controller to detect compromised switches through real-time analysis of the periodically collected reports. Two types of malicious behavior of compromised switches are investigated: packet dropping and packet swapping. 
We proposed two anomaly detection algorithms to detect packet droppers and packet swappers. Our simulation results show that our proposed methods can effectively detect packet droppers and swappers. To the best of our knowledge, our work is the first to address malicious switches detection using statistics reports in SDN.", "title": "" }, { "docid": "c52937af593984b680c66fa389111e08", "text": "Symmetries exist in many 3D models while efficiently finding their symmetry planes is important and useful for many related applications. This paper presents a simple and efficient view-based reflection symmetry detection method based on the viewpoint entropy features of a set of sample views of a 3D model. Before symmetry detection, we align the 3D model based on the Continuous Principal Component Analysis (CPCA) method. To avoid the high computational load resulting from a directly combinatorial matching among the sample views, we develop a fast symmetry plane detection method by first generating a candidate symmetry plane based on a matching pair of sample views and then verifying whether the number of remaining matching pairs is within a minimum number. Experimental results and two related applications demonstrate better accuracy, efficiency, robustness and versatility of our algorithm than state-of-the-art approaches.", "title": "" }, { "docid": "4c596974ba7dde7525e028bd7f168e61", "text": "In ranking with the pairwise classification approach, the loss associated to a predicted ranked list is the mean of the pairwise classification losses. This loss is inadequate for tasks like information retrieval where we prefer ranked lists with high precision on the top of the list. We propose to optimize a larger class of loss functions for ranking, based on an ordered weighted average (OWA) (Yager, 1988) of the classification losses. Convex OWA aggregation operators range from the max to the mean depending on their weights, and can be used to focus on the top ranked elements as they give more weight to the largest losses. When aggregating hinge losses, the optimization problem is similar to the SVM for interdependent output spaces. Moreover, we show that OWA aggregates of margin-based classification losses have good generalization properties. Experiments on the Letor 3.0 benchmark dataset for information retrieval validate our approach.", "title": "" }, { "docid": "5fc9fe7bcc50aad948ebb32aefdb2689", "text": "This paper explores the use of set expansion (SE) to improve question answering (QA) when the expected answer is a list of entities belonging to a certain class. Given a small set of seeds, SE algorithms mine textual resources to produce an extended list including additional members of the class represented by the seeds. We explore the hypothesis that a noise-resistant SE algorithm can be used to extend candidate answers produced by a QA system and generate a new list of answers that is better than the original list produced by the QA system. We further introduce a hybrid approach which combines the original answers from the QA system with the output from the SE algorithm. Experimental results for several state-of-the-art QA systems show that the hybrid system performs better than the QA systems alone when tested on list question data from past TREC evaluations.", "title": "" }, { "docid": "bfa6e76830bc1dfcbec473f912797e0e", "text": "We present OpenFace, our new open-source face recognition system that approaches state-of-the-art accuracy. 
Integrating OpenFace with inter-frame tracking, we build RTFace, a mechanism for denaturing video streams that selectively blurs faces according to specified policies at full frame rates. This enables privacy management for live video analytics while providing a secure approach for handling retrospective policy exceptions. Finally, we present a scalable, privacy-aware architecture for large camera networks using RTFace.", "title": "" }, { "docid": "e9306731d9ed290a0469ac329808c6c3", "text": "The biomedical literature grows at a tremendous rate and PubMed comprises already over 15 000 000 abstracts. Finding relevant literature is an important and difficult problem. We introduce GoPubMed, a web server which allows users to explore PubMed search results with the Gene Ontology (GO), a hierarchically structured vocabulary for molecular biology. GoPubMed provides the following benefits: first, it gives an overview of the literature abstracts by categorizing abstracts according to the GO and thus allowing users to quickly navigate through the abstracts by category. Second, it automatically shows general ontology terms related to the original query, which often do not even appear directly in the abstract. Third, it enables users to verify its classification because GO terms are highlighted in the abstracts and as each term is labelled with an accuracy percentage. Fourth, exploring PubMed abstracts with GoPubMed is useful as it shows definitions of GO terms without the need for further look up. GoPubMed is online at www.gopubmed.org. Querying is currently limited to 100 papers per query.", "title": "" }, { "docid": "e964a46706179a92b775307166a64c8a", "text": "I general, perceptions of information systems (IS) success have been investigated within two primary research streams—the user satisfaction literature and the technology acceptance literature. These two approaches have been developed in parallel and have not been reconciled or integrated. This paper develops an integrated research model that distinguishes beliefs and attitudes about the system (i.e., object-based beliefs and attitudes) from beliefs and attitudes about using the system (i.e., behavioral beliefs and attitudes) to build the theoretical logic that links the user satisfaction and technology acceptance literature. The model is then tested using a sample of 465 users from seven different organizations who completed a survey regarding their use of data warehousing software. The proposed model was supported, providing preliminary evidence that the two perspectives can and should be integrated. The integrated model helps build the bridge from design and implementation decisions to system characteristics (a core strength of the user satisfaction literature) to the prediction of usage (a core strength of the technology acceptance literature).", "title": "" }, { "docid": "84f496674fa8c3436f06d4663de3da84", "text": "The growth of E-Banking has led to an ease of access and 24-hour banking facility for one and all. However, this has led to a rise in e-banking fraud which is a growing problem affecting users around the world. As card is becoming the most prevailing mode of payment for online as well as regular purchase, fraud related with it is also increasing. The drastic upsurge of online banking fraud can be seen as an integrative misuse of social, cyber and physical resources [1]. 
Thus, the proposed system uses cryptography and steganography technology along with various data mining techniques in order to effectively secure the e-banking process and prevent online fraud.", "title": "" } ]
scidocsrr
0177f1a1e9bc08c3edad6fd9a7b3947c
Recurrent Inference Machines for Solving Inverse Problems
[ { "docid": "c1f6052ecf802f1b4b2e9fd515d7ea15", "text": "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.", "title": "" }, { "docid": "89e6f96c9b1e90749ec730e21cb04004", "text": "Demosaicing is an important first step for color image acquisition. For practical reasons, demosaicing algorithms have to be both efficient and yield high quality results in the presence of noise. The demosaicing problem poses several challenges, e.g. zippering and false color artifacts as well as edge blur. In this work, we introduce a novel learning based method that can overcome these challenges. We formulate demosaicing as an image restoration problem and propose to learn efficient regularization inspired by a variational energy minimization framework that can be trained for different sensor layouts. Our algorithm performs joint demosaicing and denoising in close relation to the real physical mosaicing process on a camera sensor. This is achieved by learning a sequence of energy minimization problems composed of a set of RGB filters and corresponding activation functions. We evaluate our algorithm on the Microsoft Demosaicing data set in terms of peak signal to noise ratio (PSNR) and structured similarity index (SSIM). Our algorithm is highly efficient both in image quality and run time. We achieve an improvement of up to 2.6 dB over recent state-of-the-art algorithms.", "title": "" }, { "docid": "0771cd99e6ad19deb30b5c70b5c98183", "text": "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative Altering is a special procedure developed to deal with these 3D groups. 
We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.", "title": "" } ]
[ { "docid": "733b391b2b3722b46a2790fe6fb1bf7a", "text": "Physicians often use chest X-rays to quickly and cheaply diagnose disease associated with the area. However, it is much more difficult to make clinical diagnoses with chest X-rays than with other imaging modalities such as CT or MRI. With computer-aided diagnosis, physicians can make chest X-ray diagnoses more quickly and accurately. Pneumonia is often diagnosed with chest X-Rays and kills around 50,000 people each year [1]. With computeraided diagnosis of pneumonia specifically, physicians can more accurately and efficiently diagnose the disease. In this project, we hope to train a model using the dataset described below to help physicians in making diagnoses of pneumonia in chest X-Rays. Our problem is thus a binary classification where the inputs are chest X-ray images and the output is one of two classes: pneumonia or non-pneumonia.", "title": "" }, { "docid": "02df2dde321bb81220abdcff59418c66", "text": "Monitoring aquatic debris is of great interest to the ecosystems, marine life, human health, and water transport. This paper presents the design and implementation of SOAR - a vision-based surveillance robot system that integrates an off-the-shelf Android smartphone and a gliding robotic fish for debris monitoring. SOAR features real-time debris detection and coverage-based rotation scheduling algorithms. The image processing algorithms for debris detection are specifically designed to address the unique challenges in aquatic environments. The rotation scheduling algorithm provides effective coverage of sporadic debris arrivals despite camera's limited angular view. Moreover, SOAR is able to dynamically offload computation-intensive processing tasks to the cloud for battery power conservation. We have implemented a SOAR prototype and conducted extensive experimental evaluation. The results show that SOAR can accurately detect debris in the presence of various environment and system dynamics, and the rotation scheduling algorithm enables SOAR to capture debris arrivals with reduced energy consumption.", "title": "" }, { "docid": "7c5d0139d729ad6f90332a9d1cd28f70", "text": "Cloud based ERP system architecture provides solutions to all the difficulties encountered by conventional ERP systems. It provides flexibility to the existing ERP systems and improves overall efficiency. This paper aimed at comparing the performance traditional ERP systems with cloud base ERP architectures. The challenges before the conventional ERP implementations are analyzed. All the main aspects of an ERP systems are compared with cloud based approach. The distinct advantages of cloud ERP are explained. The difficulties in cloud architecture are also mentioned.", "title": "" }, { "docid": "7ccac1f6b753518495c44a48f4ec324a", "text": "We propose a method to recover the shape of a 3D room from a full-view indoor panorama. Our algorithm can automatically infer a 3D shape from a collection of partially oriented superpixel facets and line segments. The core part of the algorithm is a constraint graph, which includes lines and superpixels as vertices, and encodes their geometric relations as edges. A novel approach is proposed to perform 3D reconstruction based on the constraint graph by solving all the geometric constraints as constrained linear least-squares. The selected constraints used for reconstruction are identified using an occlusion detection method with a Markov random field. 
Experiments show that our method can recover room shapes that can not be addressed by previous approaches. Our method is also efficient, that is, the inference time for each panorama is less than 1 minute.", "title": "" }, { "docid": "2c704a11e212b90520e92adf85696674", "text": "The authors in this study examined the function and public reception of critical tweeting in online campaigns of four nationalist populist politicians during major national election campaigns. Using a mix of qualitative coding and case study inductive methods, we analyzed the tweets of Narendra Modi, Nigel Farage, Donald Trump, and Geert Wilders before the 2014 Indian general elections, the 2016 UK Brexit referendum, the 2016 US presidential election, and the 2017 Dutch general election, respectively. Our data show that Trump is a consistent outlier in terms of using critical language on Twitter when compared to Wilders, Farage, and Modi, but that all four leaders show significant investment in various forms of antagonistic messaging including personal insults, sarcasm, and labeling, and that these are rewarded online by higher retweet rates. Building on the work of Murray Edelman and his notion of a political spectacle, we examined Twitter as a performative space for critical rhetoric within the frame of nationalist politics. We found that cultural and political differences among the four settings also impact how each politician employs these tactics. Our work proposes that studies of social media spaces need to bring normative questions into traditional notions of collaboration. As we show here, political actors may benefit from in-group coalescence around antagonistic messaging, which while serving as a call to arms for online collaboration for those ideologically aligned, may on a societal level lead to greater polarization.", "title": "" }, { "docid": "8fcc03933f2287eb6e6a6d2730d2c0cd", "text": "While virtualization helps to enable multi-tenancy in data centers, it introduces new challenges to the resource management in traditional OSes. We find that one important design in an OS, prioritizing interactive and I/O-bound workloads, can become ineffective in a virtualized OS. Resource multiplexing between multiple tenants breaks the assumption of continuous CPU availability in physical systems and causes two types of priority inversions in virtualized OSes. In this paper, we present xBalloon, a lightweight approach to preserving I/O prioritization. It uses a balloon process in the virtualized OS to avoid priority inversion in both short-term and long-term scheduling. Experiments in a local Xen environment and Amazon EC2 show that xBalloon improves I/O performance in a recent Linux kernel by as much as 136% on network throughput, 95% on disk throughput, and 125x on network tail latency.", "title": "" }, { "docid": "f6ddb7fd8a4a06d8a0e58b02085b9481", "text": "We explore approximate policy iteration (API), replacing t he usual costfunction learning step with a learning step in policy space. We give policy-language biases that enable solution of very large relational Markov decision processes (MDPs) that no previous techniqu e can solve. 
In particular, we induce high-quality domain-specific planners for classical planning domains (both deterministic and stochastic variants) by solving such domains as extremely large MDPs.", "title": "" }, { "docid": "6fd1d745512130fa62672f5a1ad5e1c2", "text": "Bitcoin, the first peer-to-peer electronic cash system, opened the door to permissionless, private, and trustless transactions. Attempts to repurpose Bitcoin’s underlying blockchain technology have run up against fundamental limitations to privacy, faithful execution, and transaction finality. We introduce Strong Federations: publicly verifiable, Byzantine-robust transaction networks that facilitate movement of any asset between disparate markets, without requiring third-party trust. Strong Federations enable commercial privacy, with support for transactions where asset types and amounts are opaque, while remaining publicly verifiable. As in Bitcoin, execution fidelity is cryptographically enforced; however, Strong Federations significantly lower capital requirements for market participants by reducing transaction latency and improving interoperability. To show how this innovative solution can be applied today, we describe Liquid: the first implementation of Strong Federations deployed in a Financial Market.", "title": "" }, { "docid": "681eb6ee0e4b31772612da151afbcd29", "text": "Due to high directionality and small wavelengths, 60 GHz links are highly vulnerable to human blockage. To overcome blockage, 60 GHz radios can use a phased-array antenna to search for and switch to unblocked beam directions. However, these techniques are reactive, and only trigger after the blockage has occurred, and hence, they take time to recover the link. In this paper, we propose BeamSpy, that can instantaneously predict the quality of 60 GHz beams, even under blockage, without the costly beam searching. BeamSpy captures unique spatial and blockage-invariant correlation among beams through a novel prediction model, exploiting which we can immediately select the best alternative beam direction whenever the current beam’s quality degrades. We apply BeamSpy to a run-time fast beam adaptation protocol, and a blockage-risk assessment scheme that can guide blockage-resilient link deployment. Our experiments on a reconfigurable 60 GHz platform demonstrate the effectiveness of BeamSpy’s prediction framework, and its usefulness in enabling robust 60 GHz links.", "title": "" }, { "docid": "89fed81d7d846086bdd284be422288cc", "text": "A considerable number of organizations continually face difficulties bringing strategy to execution, and suffer from a lack of structure and transparency in corporate strategic management. Yet, enterprise architecture as a fundamental exercise to achieve a structured description of the enterprise and its relationships appears far from being adopted in the strategic management arena. To move the adoption process along, this paper develops a comprehensive business architecture framework that assimilates and extends prior research and applies the framework to selected scenarios in corporate strategic management. This paper also presents the approach in practice, based on a qualitative appraisal of interviews with strategic directors across different industries. 
With its integrated conceptual guideline for using enterprise architecture to facilitate corporate strategic management and the insights gained from the interviews, this paper not only delves more deeply into the research but also offers advice for both researchers and practitioners.", "title": "" }, { "docid": "1d8b13738cc83d9b892ae716adf28f56", "text": "In the 21 st century, organisations cannot succeed in marketing by focusing only on the marketing mix without a focus on its impact on creating customer loyalty. Customer loyalty is considered to be a key ingredient in enhancing the survival of businesses especially in the situations faced by highly competitive industries. While the antecedents of customer loyalty connected with the marketing mix factors have been well investigated, much still remain regarding some of the intermediate conditions created by the marketing mix factors and customer loyalty. This study sought to investigate the relationship between corporate image and customer loyalty in the mobile telecommunication market in Kenya. The study was guided by several hypotheses that tested the nature of the relationship between four aspects of corporate image and customer loyalty. The study adopted the descriptive survey research design and used a multistage stratified sampling technique to obtain 320 respondents from among students across the campuses of Kenyatta University. Primary data was obtained with questionnaire and analysed using Pearson product-moment correlation coefficient and regression analysis to test the degree of association between the dependent and the independent variables with the aid of the Statistical Product and Service Solutions (SPSS). The new findings of the study showed a positive and statistically significant relationship between the dimensions of corporate image and customer loyalty. The variables significantly predicted customer loyalty. The reported findings in the study raise implications for marketing theory and practice suitable to inform strategic decisions for firms in the telecommunication sector in Kenya.", "title": "" }, { "docid": "2990de2e037498b22fb66b3ddc635d49", "text": "Class imbalance is a problem that is common to many application domains. When examples of one class in a training data set vastly outnumber examples of the other class(es), traditional data mining algorithms tend to create suboptimal classification models. Several techniques have been used to alleviate the problem of class imbalance, including data sampling and boosting. In this paper, we present a new hybrid sampling/boosting algorithm, called RUSBoost, for learning from skewed training data. This algorithm provides a simpler and faster alternative to SMOTEBoost, which is another algorithm that combines boosting and data sampling. This paper evaluates the performances of RUSBoost and SMOTEBoost, as well as their individual components (random undersampling, synthetic minority oversampling technique, and AdaBoost). We conduct experiments using 15 data sets from various application domains, four base learners, and four evaluation metrics. RUSBoost and SMOTEBoost both outperform the other procedures, and RUSBoost performs comparably to (and often better than) SMOTEBoost while being a simpler and faster technique. 
Given these experimental results, we highly recommend RUSBoost as an attractive alternative for improving the classification performance of learners built using imbalanced data.", "title": "" }, { "docid": "20d754528009ebce458eaa748312b2fe", "text": "This poster provides a comparative study between Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, using Markov Decision Processes (MDP), which are used for the imitation learning problem where an agent tries to learn from demonstrations of an expert. In the AL framework, the agent tries to learn the expert policy whereas in the IRL framework, the agent tries to learn a reward which can explain the behavior of the expert. This reward is then optimized to imitate the expert. One can wonder if it is worth estimating such a reward, or if estimating a policy is sufficient. This quite natural question has not really been addressed in the literature right now. We provide partial answers, both from a theoretical and empirical point of view.", "title": "" }, { "docid": "12a584349b6b25c131b55038deb0d920", "text": "We address in this paper the problem of modifying both profits and costs of a fractional knapsack problem optimally such that a prespecified solution becomes an optimal solution with respect to new parameters. This problem is called the inverse fractional knapsack problem. Concerning the l1-norm, we first prove that the problem is NP-hard. The problem can be however solved in quadratic time if we only modify profit parameters. Additionally, we develop a quadratic-time algorithm that solves the inverse fractional knapsack problem under l∞-norm.", "title": "" }, { "docid": "739db4358ac89d375da0ed005f4699ad", "text": "All doctors have encountered patients whose symptoms they cannot explain. These individuals frequently provoke despair and disillusionment. Many doctors make a link between inexplicable physical symptoms and assumed psychiatric illness. An array of adjectives in medicine apply to symptoms without established organic basis – ‘supratentorial’, ‘psychosomatic’, ‘functional’ – and these are sometimes used without reference to their real meaning. In psychiatry, such symptoms fall under the umbrella of the somatoform disorders, which includes a broad range of diagnoses. Conversion disorder is just one of these. Its meaning is not always well understood and it is often confused with somatisation disorder.† Our aim here is to clarify the notion of a conversion disorder (and the differences between conversion and other somatoform disorders) and to discuss prevalence, aetiology, management and prognosis.", "title": "" }, { "docid": "1c2043ac65c6d8a47bffb7dcbab42c54", "text": "In the past three years, Emotion Recognition in the Wild (EmotiW) Grand Challenge has drawn more and more attention due to its huge potential applications. In the fourth challenge, aimed at the task of video based emotion recognition, we propose a multi-clue emotion fusion (MCEF) framework by modeling human emotion from three mutually complementary sources, facial appearance texture, facial action, and audio. To extract high-level emotion features from sequential face images, we employ a CNN-RNN architecture, where face image from each frame is first fed into the fine-tuned VGG-Face network to extract face feature, and then the features of all frames are sequentially traversed in a bidirectional RNN so as to capture dynamic changes of facial textures. 
To attain more accurate facial actions, a facial landmark trajectory model is proposed to explicitly learn emotion variations of facial components. Further, audio signals are also modeled in a CNN framework by extracting low-level energy features from segmented audio clips and then stacking them as an image-like map. Finally, we fuse the results generated from three clues to boost the performance of emotion recognition. Our proposed MCEF achieves an overall accuracy of 56.66% with a large improvement of 16.19% with respect to the baseline.", "title": "" }, { "docid": "f306da3efd4770a4f912c6a3e1d1ab58", "text": "Objective: disease processes are often marked by both neural and muscular changes that alter movement control and execution, but these adaptations are difficult to tease apart because they occur simultaneously. This is addressed by swapping an individual's limb dynamics with a neurally controlled facsimile using an interactive musculoskeletal simulator (IMS) that allows controlled modifications of musculoskeletal dynamics. This paper details the design and operation of the IMS, quantifies and describes human adaptation to the IMS, and determines whether the IMS allows users to move naturally, a prerequisite for manipulation experiments. Methods: healthy volunteers (n = 4) practiced a swift goal-directed task (back-and-forth elbow flexion/extension) for 90 trials with the IMS off (normal dynamics) and 240 trials with the IMS on, i.e., the actions of a user's personalized electromyography-driven musculoskeletal model are robotically imposed back onto the user. Results: after practicing with the IMS on, subjects could complete the task with end-point errors of 1.56°, close to the speed-matched IMS-off error of 0.57°. Muscle activity, joint torque, and arm kinematics for IMS-on and -off conditions were well matched for three subjects (root-mean-squared error [RMSE] = 0.16 N·m), but the error was higher for one subject with a small stature (RMSE = 0.25 N·m). Conclusion: a well-matched musculoskeletal model allowed IMS users to perform a goal-directed task nearly as well as when the IMS was not active. Significance: this advancement permits real-time manipulations of musculoskeletal dynamics, which could increase our understanding of muscular and neural co-adaptations to injury, disease, disuse, and aging.", "title": "" }, { "docid": "05540e05370b632f8b8cd165ae7d1d29", "text": "We describe FreeCam a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene. The FreeCam sensing hardware consists of a small number of static color video cameras and state-of-the-art Kinect depth sensors, and the FreeCam software uses a number of advanced GPU processing and rendering techniques to seamlessly merge the input streams, providing a pleasant user experience. A system such as FreeCam is critical for applications such as telepresence, 3D video-conferencing and interactive 3D TV. FreeCam may also be used to produce multi-view video, which is critical to drive newgeneration autostereoscopic lenticular 3D displays.", "title": "" }, { "docid": "192d561b6ef1173a3c52f55534697aa1", "text": "This paper describes a mixed-integer linear programming optimization method for the coupled problems of airport taxiway routing and runway scheduling. 
The receding-horizon formulation and the use of iteration in the avoidance constraints allows the scalability of the baseline algorithm presented, with examples based on Heathrow Airport, London, U.K., which contains up to 240 aircraft. The results show that average taxi times can be reduced by half, compared with the first-come-first-served approach. The main advantage is shown with the departure aircraft flow. Comparative testing demonstrates that iteration reduces the computational demand of the required separation constraints while introducing no loss in performance.", "title": "" } ]
scidocsrr
0ddad1e88882cc7b10135b89db8b3d78
Out of the Box: Reasoning with Graph Convolution Nets for Factual Visual Question Answering
[ { "docid": "673bf6ecf9ae6fb61f7b01ff284c0a5f", "text": "We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering.", "title": "" } ]
[ { "docid": "704598402da135b6b7e3251de4c6edf8", "text": "Almost every complex software system today is configurable. While configurability has many benefits, it challenges performance prediction, optimization, and debugging. Often, the influences of individual configuration options on performance are unknown. Worse, configuration options may interact, giving rise to a configuration space of possibly exponential size. Addressing this challenge, we propose an approach that derives a performance-influence model for a given configurable system, describing all relevant influences of configuration options and their interactions. Our approach combines machine-learning and sampling heuristics in a novel way. It improves over standard techniques in that it (1) represents influences of options and their interactions explicitly (which eases debugging), (2) smoothly integrates binary and numeric configuration options for the first time, (3) incorporates domain knowledge, if available (which eases learning and increases accuracy), (4) considers complex constraints among options, and (5) systematically reduces the solution space to a tractable size. A series of experiments demonstrates the feasibility of our approach in terms of the accuracy of the models learned as well as the accuracy of the performance predictions one can make with them.", "title": "" }, { "docid": "eb42c7dafed682a0643b46f49d2a86ec", "text": "OBJECTIVE\nTo evaluate the effectiveness of telephone based peer support in the prevention of postnatal depression.\n\n\nDESIGN\nMultisite randomised controlled trial.\n\n\nSETTING\nSeven health regions across Ontario, Canada.\n\n\nPARTICIPANTS\n701 women in the first two weeks postpartum identified as high risk for postnatal depression with the Edinburgh postnatal depression scale and randomised with an internet based randomisation service.\n\n\nINTERVENTION\nProactive individualised telephone based peer (mother to mother) support, initiated within 48-72 hours of randomisation, provided by a volunteer recruited from the community who had previously experienced and recovered from self reported postnatal depression and attended a four hour training session.\n\n\nMAIN OUTCOME MEASURES\nEdinburgh postnatal depression scale, structured clinical interview-depression, state-trait anxiety inventory, UCLA loneliness scale, and use of health services.\n\n\nRESULTS\nAfter web based screening of 21 470 women, 701 (72%) eligible mothers were recruited. A blinded research nurse followed up more than 85% by telephone, including 613 at 12 weeks and 600 at 24 weeks postpartum. At 12 weeks, 14% (40/297) of women in the intervention group and 25% (78/315) in the control group had an Edinburgh postnatal depression scale score >12 (chi(2)=12.5, P<0.001; number need to treat 8.8, 95% confidence interval 5.9 to 19.6; relative risk reduction 0.46, 95% confidence interval 0.24 to 0.62). There was a positive trend in favour of the intervention group for maternal anxiety but not loneliness or use of health services. For ethical reasons, participants identified with clinical depression at 12 weeks were referred for treatment, resulting in no differences between groups at 24 weeks. 
Of the 221 women in the intervention group who received and evaluated their experience of peer support, over 80% were satisfied and would recommend this support to a friend.\n\n\nCONCLUSION\nTelephone based peer support can be effective in preventing postnatal depression among women at high risk.\n\n\nTRIAL REGISTRATION\nISRCTN 68337727.", "title": "" }, { "docid": "d08529ef66abefda062a414acb278641", "text": "Spend your few moment to read a book even only few pages. Reading book is not obligation and force for everybody. When you don't want to read, you can get punishment from the publisher. Read a book becomes a choice of your different characteristics. Many people with reading habit will always be enjoyable to read, or on the contrary. For some reasons, this inductive logic programming techniques and applications tends to be the representative book in this website.", "title": "" }, { "docid": "874876e2ed9e4a2ba044cf62d408da55", "text": "It is widely believed that refactoring improves software quality and programmer productivity by making it easier to maintain and understand software systems. However, the role of refactorings has not been systematically investigated using fine-grained evolution history. We quantitatively and qualitatively studied API-level refactorings and bug fixes in three large open source projects, totaling 26523 revisions of evolution.\n The study found several surprising results: One, there is an increase in the number of bug fixes after API-level refactorings. Two, the time taken to fix bugs is shorter after API-level refactorings than before. Three, a large number of refactoring revisions include bug fixes at the same time or are related to later bug fix revisions. Four, API-level refactorings occur more frequently before than after major software releases. These results call for re-thinking refactoring's true benefits. Furthermore, frequent floss refactoring mistakes observed in this study call for new software engineering tools to support safe application of refactoring and behavior modifying edits together.", "title": "" }, { "docid": "1a58f72cd0f6e979a72dbc233e8c4d4a", "text": "The revolution of genome sequencing is continuing after the successful second-generation sequencing (SGS) technology. The third-generation sequencing (TGS) technology, led by Pacific Biosciences (PacBio), is progressing rapidly, moving from a technology once only capable of providing data for small genome analysis, or for performing targeted screening, to one that promises high quality de novo assembly and structural variation detection for human-sized genomes. In 2014, the MinION, the first commercial sequencer using nanopore technology, was released by Oxford Nanopore Technologies (ONT). MinION identifies DNA bases by measuring the changes in electrical conductivity generated as DNA strands pass through a biological pore. Its portability, affordability, and speed in data production makes it suitable for real-time applications, the release of the long read sequencer MinION has thus generated much excitement and interest in the genomics community. While de novo genome assemblies can be cheaply produced from SGS data, assembly continuity is often relatively poor, due to the limited ability of short reads to handle long repeats. Assembly quality can be greatly improved by using TGS long reads, since repetitive regions can be easily expanded into using longer sequencing lengths, despite having higher error rates at the base level. 
The potential of nanopore sequencing has been demonstrated by various studies in genome surveillance at locations where rapid and reliable sequencing is needed, but where resources are limited.", "title": "" }, { "docid": "44e3ca0f64566978c3e0d0baeaa93543", "text": "Many applications of fast Fourier transforms (FFT’s), such as computer tomography, geophysical signal processing, high-resolution imaging radars, and prediction filters, require high-precision output. An error analysis reveals that the usual method of fixed-point computation of FFT’s of vectors of length2 leads to an average loss of/2 bits of precision. This phenomenon, often referred to as computational noise, causes major problems for arithmetic units with limited precision which are often used for real-time applications. Several researchers have noted that calculation of FFT’s with algebraic integers avoids computational noise entirely, see, e.g., [1]. We will combine a new algorithm for approximating complex numbers by cyclotomic integers with Chinese remaindering strategies to give an efficient algorithm to compute -bit precision FFT’s of length . More precisely, we will approximate complex numbers by cyclotomic integers in [ 2 2 ] whose coefficients, when expressed as polynomials in 2 2 , are bounded in absolute value by some integer . For fixed our algorithm runs in time (log( )), and produces an approximation with worst case error of (1 2 ). We will prove that this algorithm has optimal worst case error by proving a corresponding lower bound on the worst case error of any approximation algorithm for this task. The main tool for designing the algorithms is the use of the cyclotomic units, a subgroup of finite index in the unit group of the cyclotomic field. First implementations of our algorithms indicate that they are fast enough to be used for the design of low-cost high-speed/highprecision FFT chips.", "title": "" }, { "docid": "a2617ce3b0d618a5e4b61033345d59b7", "text": "Asymmetry of the eyelid crease is a major complication following double eyelid blepharoplasty; the reasons are multivariate. This study presents, for the first time, a novel method, based on high-definition magnetic resonance imaging and high-precision weighing of tissue, for quantitating preoperative asymmetry of eyelid thickness in young Chinese women presenting for blepharoplasty. From 1 January 2008 to 1 October 2011, we studied 1217 women requesting double eyelid blepharoplasty. The patients ranged in age from 17 to 24 years (average 21.13 years). All patients were of Chinese Han nationality. Soft-tissue thickness at the tarsal plate superior border was 5.05 ± 1.01 units on the right side and 4.12 ± 0.96 units on the left. The submuscular fibro-adipose tissue area was 95.12 ± 23.27 unit(2) on the right side and 76.05 ± 21.11 unit(2) on the left. The pre-aponeurotic fat pad area was 112.33 ± 29.16 unit(2) on the right side and 91.25 ± 27.32 unit(2) on the left. The orbicularis muscle resected weighed 0.185 ± 0.055 g on the right side and 0.153 ± 0.042 g on the left; the orbital fat resected weighed 0.171 ± 0.062 g on the right side and 0.106 ± 0.057 g on the left. In conclusion, upper eyelid thickness asymmetry is a common phenomenon in young Chinese women who wish to undertake double eyelid blepharoplasty. 
We have demonstrated that the orbicularis muscle and orbital fat pad are consistently thicker on the right side than on the left.", "title": "" }, { "docid": "1bbd0eca854737c94e62442ee4cedac8", "text": "Most convolutional neural networks (CNNs) lack midlevel layers that model semantic parts of objects. This limits CNN-based methods from reaching their full potential in detecting and utilizing small semantic parts in recognition. Introducing such mid-level layers can facilitate the extraction of part-specific features which can be utilized for better recognition performance. This is particularly important in the domain of fine-grained recognition. In this paper, we propose a new CNN architecture that integrates semantic part detection and abstraction (SPDACNN) for fine-grained classification. The proposed network has two sub-networks: one for detection and one for recognition. The detection sub-network has a novel top-down proposal method to generate small semantic part candidates for detection. The classification sub-network introduces novel part layers that extract features from parts detected by the detection sub-network, and combine them for recognition. As a result, the proposed architecture provides an end-to-end network that performs detection, localization of multiple semantic parts, and whole object recognition within one framework that shares the computation of convolutional filters. Our method outperforms state-of-theart methods with a large margin for small parts detection (e.g. our precision of 93.40% vs the best previous precision of 74.00% for detecting the head on CUB-2011). It also compares favorably to the existing state-of-the-art on finegrained classification, e.g. it achieves 85.14% accuracy on CUB-2011.", "title": "" }, { "docid": "ecfd9b38cc68c4af9addb4915424d6d0", "text": "The conditions for antenna diversity action are investigated. In terms of the fields, a condition is shown to be that the incident field and the far field of the diversity antenna should obey (or nearly obey) an orthogonality relationship. The role of mutual coupling is central, and it is different from that in a conventional array antenna. In terms of antenna parameters, a sufficient condition for diversity action for a certain class of high gain antennas at the mobile, which approximates most practical mobile antennas, is shown to be zero (or low) mutual resistance between elements. This is not the case at the base station, where the condition is necessary only. The mutual resistance condition offers a powerful design tool, and examples of new mobile diversity antennas are discussed along with some existing designs.", "title": "" }, { "docid": "49e91d22adb0cdeb014b8330e31f226d", "text": "Ghrelin increases non-REM sleep and decreases REM sleep in young men but does not affect sleep in young women. In both sexes, ghrelin stimulates the activity of the somatotropic and the hypothalamic-pituitary-adrenal (HPA) axis, as indicated by increased growth hormone (GH) and cortisol plasma levels. These two endocrine axes are crucially involved in sleep regulation. As various endocrine effects are age-dependent, aim was to study ghrelin's effect on sleep and secretion of GH and cortisol in elderly humans. 
Sleep-EEGs (2300-0700 h) and secretion profiles of GH and cortisol (2000-0700 h) were determined in 10 elderly men (64.0+/-2.2 years) and 10 elderly, postmenopausal women (63.0+/-2.9 years) twice, receiving 50 microg ghrelin or placebo at 2200, 2300, 0000, and 0100 h, in this single-blind, randomized, cross-over study. In men, ghrelin compared to placebo was associated with significantly more stage 2 sleep (placebo: 183.3+/-6.1; ghrelin: 221.0+/-12.2 min), slow wave sleep (placebo: 33.4+/-5.1; ghrelin: 44.3+/-7.7 min) and non-REM sleep (placebo: 272.6+/-12.8; ghrelin: 318.2+/-11.0 min). Stage 1 sleep (placebo: 56.9+/-8.7; ghrelin: 50.9+/-7.6 min) and REM sleep (placebo: 71.9+/-9.1; ghrelin: 52.5+/-5.9 min) were significantly reduced. Furthermore, delta power in men was significantly higher and alpha power and beta power were significantly lower after ghrelin than after placebo injection during the first half of night. In women, no effects on sleep were observed. In both sexes, ghrelin caused comparable increases and secretion patterns of GH and cortisol. In conclusion, ghrelin affects sleep in elderly men but not women resembling findings in young subjects.", "title": "" }, { "docid": "8b581e9ae50ed1f1aa1077f741fa4504", "text": "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.", "title": "" }, { "docid": "6a80eb8001380f4d63a8cf3f3693f73c", "text": "Traditional energy measurement fails to provide support to consumers to make intelligent decisions to save energy. Non-intrusive load monitoring is one solution that provides disaggregated power consumption profiles. Machine learning approaches rely on public datasets to train parameters for their algorithms, most of which only provide low-frequency appliance-level measurements, thus limiting the available feature space for recognition.\n In this paper, we propose a low-cost measurement system for high-frequency energy data. Our work utilizes an off-the-shelf power strip with a voltage-sensing circuit, current sensors, and a single-board PC as data aggregator. We develop a new architecture and evaluate the system in real-world environments. 
The self-contained unit for six monitored outlets can achieve up to 50 kHz for all signals simultaneously. A simple design and off-the-shelf components allow us to keep costs low. Equipping a building with our measurement systems is more feasible compared to expensive existing solutions. We used the outlined system architecture to manufacture 20 measurement systems to collect energy data over several months of more than 50 appliances at different locations, with an aggregated size of 15 TB.", "title": "" }, { "docid": "c998c8d5cc17ba668492d813d522a17d", "text": "This paper presents a 3D face reconstruction method based on multi-view stereo algorithm, the proposed algorithm reconstructs 3D face model from videos captured around static human faces. Image sequence is processed as the input of shape from motion algorithm to estimate camera parameters and camera positions, 3D points with different denseness degree could be acquired by using a method named patch based multi-view stereopsis, finally, the proposed method uses a surface reconstruction algorithm to generate a watertight 3D face model. The proposed approach can automatically detect facial feature points; it does not need any initialization and special equipments; videos can be obtained with commonly used picture pick-up device such as mobile phones. Several groups of experiments have been conducted to validate the availability of the proposed method.", "title": "" }, { "docid": "70b62dfeab05d3bc4c64199a5cea3b1a", "text": "Sleep timing undergoes profound changes during adolescence, often resulting in inadequate sleep duration. The present study examines the relationship of sleep duration with positive attitude toward life and academic achievement in a sample of 2716 adolescents in Switzerland (mean age: 15.4 years, SD = 0.8), and whether this relationship is mediated by increased daytime tiredness and lower self-discipline/behavioral persistence. Further, we address the question whether adolescents who start school modestly later (20 min; n = 343) receive more sleep and report better functioning. Sleeping less than an average of 8 h per night was related to more tiredness, inferior behavioral persistence, less positive attitude toward life, and lower school grades, as compared to longer sleep duration. Daytime tiredness and behavioral persistence mediated the relationship between short sleep duration and positive attitude toward life and school grades. Students who started school 20 min later received reliably more sleep and reported less tiredness.", "title": "" }, { "docid": "2fbd1b2e25473affb40990195b26a88b", "text": "In this paper we considerably improve on a state-of-the-art alpha matting approach by incorporating a new prior which is based on the image formation process. In particular, we model the prior probability of an alpha matte as the convolution of a high-resolution binary segmentation with the spatially varying point spread function (PSF) of the camera. Our main contribution is a new and efficient de-convolution approach that recovers the prior model, given an approximate alpha matte. By assuming that the PSF is a kernel with a single peak, we are able to recover the binary segmentation with an MRF-based approach, which exploits flux and a new way of enforcing connectivity. The spatially varying PSF is obtained via a partitioning of the image into regions of similar defocus. 
Incorporating our new prior model into a state-of-the-art matting technique produces results that outperform all competitors, which we confirm using a publicly available benchmark.", "title": "" }, { "docid": "e7790dcba1b3982f8cf46ae7dc78fc11", "text": "This paper introduces a new approach for expansion of queries with geographical context. The proposed strategy is based on a query parser that captures geonames and spatial relationships, and maps geographical features and feature types into concepts of a geographical ontology. Different strategies for query expansion, according to the geographical restrictions given by the user, are compared. The proposed method allows a more versatile and focused expansion towards the geographical information need of the user.", "title": "" }, { "docid": "e2649203ae3e8648c8ec1eafb7a19d6e", "text": "This paper describes an algorithm to extract adaptive and quality quadrilateral/hexahedral meshes directly from volumetric data. First, a bottom-up surface topology preserving octree-based algorithm is applied to select a starting octree level. Then the dual contouring method is used to extract a preliminary uniform quad/hex mesh, which is decomposed into finer quads/hexes adaptively without introducing any hanging nodes. The positions of all boundary vertices are recalculated to approximate the boundary surface more accurately. Mesh adaptivity can be controlled by a feature sensitive error function, the regions that users are interested in, or finite element calculation results. Finally, a relaxation based technique is deployed to improve mesh quality. Several demonstration examples are provided from a wide variety of application domains. Some extracted meshes have been extensively used in finite element simulations.", "title": "" }, { "docid": "cfe1b91f879ab59b3afcfe2bf64c911e", "text": "We consider a variant of the classical three-peg Tower of Hanoi problem, where limitations on the possible moves among the pegs are imposed. Each variant corresponds to a di-graph whose vertices are the pegs, and an edge from one vertex to another designates the ability of moving a disk from the first peg to the other, provided that the rules concerning the disk sizes are obeyed. There are five non-isomorphic graphs on three vertices, which are strongly connected—a sufficient condition for the existence of a solution to the problem. We provide optimal algorithms for the problem for all these graphs, and find the number of moves each requires.", "title": "" }, { "docid": "86f1eb528e5d062a4a8d7c2d03ae4016", "text": "Recent advances in representation learning on graphs, mainly leveraging graph convolutional networks, have brought a substantial improvement on many graphbased benchmark tasks. While novel approaches to learning node embeddings are highly suitable for node classification and link prediction, their application to graph classification (predicting a single label for the entire graph) remains mostly rudimentary, typically using a single global pooling step to aggregate node features or a hand-designed, fixed heuristic for hierarchical coarsening of the graph structure. An important step towards ameliorating this is differentiable graph coarsening—the ability to reduce the size of the graph in an adaptive, datadependent manner within a graph neural network pipeline, analogous to image downsampling within CNNs. However, the previous prominent approach to pooling has quadratic memory requirements during training and is therefore not scalable to large graphs. 
Here we combine several recent advances in graph neural network design to demonstrate that competitive hierarchical graph classification results are possible without sacrificing sparsity. Our results are verified on several established graph classification benchmarks, and highlight an important direction for future research in graph-based neural networks.", "title": "" }, { "docid": "ed9d72566cdf3e353bf4b1e589bf85eb", "text": "In the last few years progress has been made in understanding basic mechanisms involved in damage to the inner ear and various potential therapeutic approaches have been developed. It was shown that hair cell loss mediated by noise or toxic drugs may be prevented by antioxidants, inhibitors of intracellular stress pathways and neurotrophic factors/neurotransmission blockers. Moreover, there is hope that once hair cells are lost, their regeneration can be induced or that stem cells can be used to build up new hair cells. However, although tremendous progress has been made, most of the concepts discussed in this review are still in the \"animal stage\" and it is difficult to predict which approach will finally enter clinical practice. In my opinion it is highly probable that some concepts of hair cell protection will enter clinical practice first, while others, such as the use of stem cells to restore hearing, are still far from clinical utility.", "title": "" } ]
scidocsrr
77ce4e914cd4cf346f7bdf5009c5d540
Elderly activities recognition and classification for applications in assisted living
[ { "docid": "aeabcc9117801db562d83709fda22722", "text": "The world’s population is aging at a phenomenal rate. Certain types of cognitive decline, in particular some forms of memory impairment, occur much more frequently in the elderly. This paper describes Autominder, a cognitive orthotic system intended to help older adults adapt to cognitive decline and continue the satisfactory performance of routine activities, thereby potentially enabling them to remain in their own homes longer. Autominder achieves this goal by providing adaptive, personalized reminders of (basic, instrumental, and extended) activities of daily living. Cognitive orthotic systems on the market today mainly provide alarms for prescribed activities at fixed times that are specified in advance. In contrast, Autominder uses a range of AI techniques to model an individual’s daily plans, observe and reason about the execution of those plans, and make decisions about whether and when it is most appropriate to issue reminders. Autominder is currently deployed on a mobile robot, and is being developed as part of the Initiative on Personal Robotic Assistants for the Elderly (the Nursebot project). © 2003 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "e766cd377c223cb3d90272e8c40a54af", "text": "This paper aims at describing the state of the art on quadratic assignment problems (QAPs). It discusses the most important developments in all aspects of the QAP such as linearizations, QAP polyhedra, algorithms to solve the problem to optimality, heuristics, polynomially solvable special cases, and asymptotic behavior. Moreover, it also considers problems related to the QAP, e.g. the biquadratic assignment problem, and discusses the relationship between the QAP and other well known combinatorial optimization problems, e.g. the traveling salesman problem, the graph partitioning problem, etc. The paper will appear in the Handbook of Combinatorial Optimization to be published by Kluwer Academic Publishers, P. Pardalos and D.-Z. Du, eds.", "title": "" }, { "docid": "4d7d99532c59415cff1a12f2b935921e", "text": "Many applications in computer graphics and virtual environments need to render datasets with large numbers of primitives and high depth complexity at interactive rates. However, standard techniques like view frustum culling and a hardware z-bu er are unable to display datasets composed of hundred of thousands of polygons at interactive frame rates on current high-end graphics systems. We add a \\conservative\"visibility culling stage to the rendering pipeline, attempting to identify and avoid processing of occluded polygons. Given a moving viewpoint, the algorithm dynamically chooses a set of occluders. Each occluder is used to compute a shadow frustum, and all primitives contained within this frustumare culled. The algorithmhierarchicallytraverses the model, culling out parts not visible from the current viewpoint using e cient, robust, and in some cases specialized interference detection algorithms. The algorithm's performance varies with the location of the viewpoint and the depth complexity of the model. In the worst case it is linear in the input size with a small constant. In this paper, we demonstrate its performance on a city model composed of 500;000 polygons and possessing varying depth complexity. We are able to cull an average of 55% of the polygons that would not be culled by view-frustum culling and obtain a commensurate improvement in frame rate. The overall approach is e ective and scalable, is applicable to all polygonal models, and can be easily implemented on top of view-frustum culling.", "title": "" }, { "docid": "7232868b492b19f6ef5e4cf1de7b6ed7", "text": "Cognitive linguistics is one of the fastest growing and influential perspectives on the nature of language, the mind, and their relationship with sociophysical (embodied) experience. It is a broad theoretical and methodological enterprise, rather than a single, closely articulated theory. Its primary commitments are outlined. These are the Cognitive Commitment-a commitment to providing a characterization of language that accords with what is known about the mind and brain from other disciplines-and the Generalization Commitment-which represents a dedication to characterizing general principles that apply to all aspects of human language. The article also outlines the assumptions and worldview which arises from these commitments, as represented in the work of leading cognitive linguists. WIREs Cogn Sci 2012, 3:129-141. 
doi: 10.1002/wcs.1163 For further resources related to this article, please visit the WIREs website.", "title": "" }, { "docid": "87696c01f32e83a2237b83c833cc94b7", "text": "Image tagging is an essential step for developing Automatic Image Annotation (AIA) methods that are based on the learning by example paradigm. However, manual image annotation, even for creating training sets for machine learning algorithms, requires hard effort and contains human judgment errors and subjectivity. Thus, alternative ways for automatically creating training examples, i.e., pairs of images and tags, are pursued. In this work, we investigate whether tags accompanying photos in the Instagram can be considered as image annotation metadata. If such a claim is proved then Instagram could be used as a very rich, easy to collect automatically, source of training data for the development of AIA techniques. Our hypothesis is that Instagram hashtags, and especially those provided by the photo owner/creator, express more accurately the content of a photo compared to the tags assigned to a photo during explicit image annotation processes like crowdsourcing. In this context, we explore the descriptive power of hashtags by examining whether other users would use the same, with the owner, hashtags to annotate an image. For this purpose 1000 Instagram images were collected and one to four hashtags, considered as the most descriptive ones for the image in question, were chosen among the hashtags used by the photo owner. An online database was constructed to generate online questionnaires containing 20 images each, which were distributed to experiment participants so they can choose the best suitable hashtag for every image according to their interpretation. Results show that an average of 66% of the participants hashtag choices coincide with those suggested by the photo owners; thus, an initial evidence towards our hypothesis confirmation can be claimed. c ⃝ 2016 Qassim University. Production and Hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer review under responsibility of Qassim University. ∗ Corresponding author. E-mail addresses: s.giannoulakis@cut.ac.cy (S. Giannoulakis), nic http://dx.doi.org/10.1016/j.jides.2016.10.001 2352-6645/ c ⃝ 2016 Qassim University. Production and Hosting by E license (http://creativecommons.org/licenses/by-nc-nd/4.0/). olas.tsapatsoulis@cut.ac.cy (N. Tsapatsoulis). lsevier B.V. This is an open access article under the CC BY-NC-ND J O U R N A L O F I N N O VA T I O N I N D I G I T A L E C O S Y S T E M S 3 ( 2 0 1 6 ) 1 1 4 – 1 2 9 115", "title": "" }, { "docid": "cd9e90ba83156a2c092d68022c4227c9", "text": "The difficulty of integer factorization is fundamental to modern cryptographic security using RSA encryption and signatures. Although a 512-bit RSA modulus was first factored in 1999, 512-bit RSA remains surprisingly common in practice across many cryptographic protocols. Popular understanding of the difficulty of 512-bit factorization does not seem to have kept pace with developments in computing power. In this paper, we optimize the CADO-NFS and Msieve implementations of the number field sieve for use on the Amazon Elastic Compute Cloud platform, allowing a non-expert to factor 512-bit RSA public keys in under four hours for $75. 
We go on to survey the RSA key sizes used in popular protocols, finding hundreds or thousands of deployed 512-bit RSA keys in DNSSEC, HTTPS, IMAP, POP3, SMTP, DKIM, SSH, and PGP.", "title": "" }, { "docid": "a172cd697bfcb1f3d2a824bb6a5bb6d1", "text": "Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain.\n We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a \"wealthy\" block to \"steal\" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest.\n We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.", "title": "" }, { "docid": "dba24c6bf3e04fc6d8b99a64b66cb464", "text": "Recommender systems have to serve in online environments which can be highly non-stationary.1. Traditional recommender algorithmsmay periodically rebuild their models, but they cannot adjust to quick changes in trends caused by timely information. In our experiments, we observe that even a simple, but online trained recommender model can perform significantly better than its batch version. We investigate online learning based recommender algorithms that can efficiently handle non-stationary data sets. We evaluate our models over seven publicly available data sets. Our experiments are available as an open source project2.", "title": "" }, { "docid": "25c92d054b39fe4951606c832edf99c0", "text": "The increasing use of machine learning algorithms, such as Convolutional Neural Networks (CNNs), makes the hardware accelerator approach very compelling. However the question of how to best design an accelerator for a given CNN has not been answered yet, even on a very fundamental level. This paper addresses that challenge, by providing a novel framework that can universally and accurately evaluate and explore various architectural choices for CNN accelerators on FPGAs. Our exploration framework is more extensive than that of any previous work in terms of the design space, and takes into account various FPGA resources to maximize performance including DSP resources, on-chip memory, and off-chip memory bandwidth. 
Our experimental results using some of the largest CNN models including one that has 16 convolutional layers demonstrate the efficacy of our framework, as well as the need for such a high-level architecture exploration approach to find the best architecture for a CNN model.", "title": "" }, { "docid": "cf248f6d767072a4569e31e49918dea1", "text": "We describe resources aimed at increasing the usability of the semantic representations utilized within the DELPH-IN (Deep Linguistic Processing with HPSG) consortium. We concentrate in particular on the Dependency Minimal Recursion Semantics (DMRS) formalism, a graph-based representation designed for compositional semantic representation with deep grammars. Our main focus is on English, and specifically English Resource Semantics (ERS) as used in the English Resource Grammar. We first give an introduction to ERS and DMRS and a brief overview of some existing resources and then describe in detail a new repository which has been developed to simplify the use of ERS/DMRS. We explain a number of operations on DMRS graphs which our repository supports, with sketches of the algorithms, and illustrate how these operations can be exploited in application building. We believe that this work will aid researchers to exploit the rich and effective but complex DELPH-IN resources.", "title": "" }, { "docid": "c281538d7aa7bd8727ce4718de82c7c8", "text": "More than 15 years after model predictive control (MPC) appeared in industry as an effective means to deal with multivariable constrained control problems, a theoretical basis for this technique has started to emerge. The issues of feasibility of the on-line optimization, stability and performance are largely understood for systems described by linear models. Much progress has been made on these issues for non-linear systems but for practical applications many questions remain, including the reliability and efficiency of the on-line computation scheme. To deal with model uncertainty ‘rigorously’ an involved dynamic programming problem must be solved. The approximation techniques proposed for this purpose are largely at a conceptual stage. Among the broader research needs the following areas are identified: multivariable system identification, performance monitoring and diagnostics, non-linear state estimation, and batch system control. Many practical problems like control objective prioritization and symptom-aided diagnosis can be integrated systematically and effectively into the MPC framework by expanding the problem formulation to include integer variables yielding a mixed-integer quadratic or linear program. Efficient techniques for solving these problems are becoming available. © 1999 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "fa44652ecd36d99d18535966727fb3d4", "text": "Spatio-temporal cuboid pyramid (STCP) for action recognition using depth motion sequences [1] is influenced by depth camera error which leads the depth motion sequence (DMS) existing many kinds of noise, especially on the surface. It means that the dimension of DMS is awfully high and the feature for action recognition becomes less apparent. In this paper, we present an effective method to reduce noise, which is to segment foreground. We firstly segment and extract human contour in the color image using convolutional network model. Then, human contour is re-segmented utilizing depth information. Thirdly we project each frame of the segmented depth sequence onto three views. 
We finally extract features from cuboids and recognize human actions. The proposed approach is evaluated on three public benchmark datasets, i.e., UTKinect-Action Dataset, MSRActionPairs Dataset and 3D Online Action Dataset. Experimental results show that our method achieves state-of-the-art performance.", "title": "" }, { "docid": "441e0a882bafc17a75fe9e2dbf3634f1", "text": "Cloud computing focuses on delivery of reliable, secure, faulttolerant, sustainable, and scalable infrastructures for hosting internet-based application services. These applications have different composition, configuration, and deployment requirements. Cloud service providers are willing to provide large scaled computing infrastructure at a cheap prices. Quantifying the performance of scheduling and allocation policy on a Cloud infrastructure (hardware, software, services) for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size is an extremely challenging problem to tackle. This problem can be tackle with the help of mobile agents. Mobile agent being a process that can transport its state from one environment to another, with its data intact, and is capable of performing appropriately in the new environment. This work proposes an agent based framework for providing scalability in cloud computing environments supported with algorithms for searching another cloud when the approachable cloud becomes overloaded and for searching closest datacenters with least response time of virtual machine (VM).", "title": "" }, { "docid": "d9a113b6b09874a4cbd9bf2f006504a6", "text": "Attracting, motivating and retaining knowledge workers have become important in a knowledge-based and tight labour market, where changing knowledge management practices and global convergence of technology has redefined the nature of work. While individualisation of employment practices and team-based work may provide personal and organisational flexibilities, aligning HR and organisational strategies for competitive advantage has become more prominent. This exploratory study identifies the most and least effective HR strategies used by knowledge intensive firms (KIFs) in Singapore for attracting, motivating and retaining these workers. The most popular strategies were not always the most effective, and there appear to be distinctive ‘bundles’ of HR practices for managing knowledge workers. These vary according to whether ownership is foreign or local. A schema, based on statistically significant findings, for improving the effectiveness of these practices in managing knowledge workers is proposed. Cross-cultural research is necessary to establish the extent of diffusion of these practices. Contact: Frank M. Horwitz, Graduate School of Business, Breakwater Campus, University of Cape Town, Private Bag Rondebosch, Cape Town 7700 South Africa. Email: fhorwitz@gsb.uct.ac.za", "title": "" }, { "docid": "17d46377e67276ec3e416d6da4bb4965", "text": "There is an increasing trend of people leaving digital traces through social media. This reality opens new horizons for urban studies. With this kind of data, researchers and urban planners can detect many aspects of how people live in cities and can also suggest how to transform cities into more efficient and smarter places to live in. In particular, their digital trails can be used to investigate tastes of individuals, and what attracts them to live in a particular city or to spend their vacation there. 
In this paper we propose an unconventional way to study how people experience the city, using information from geotagged photographs that people take at different locations. We compare the spatial behavior of residents and tourists in 10 most photographed cities all around the world. The study was conducted on both a global and local level. On the global scale we analyze the 10 most photographed cities and measure how attractive each city is for people visiting it from other cities within the same country or from abroad. For the purpose of our analysis we construct the users’ mobility network and measure the strength of the links between each pair of cities as a level of attraction of people living in one city (i.e., origin) to the other city (i.e., destination). On the local level we study the spatial distribution of user activity and identify the photographed hotspots inside each city. The proposed methodology and the results of our study are a low cost mean to characterize touristic activity within a certain location and can help cities strengthening their touristic potential.", "title": "" }, { "docid": "48844037619734b041c03a4bc7c680ba", "text": "Surfactants are compounds that reduce the surface tension of a liquid, the interfacial tension between two liquids, or that between a liquid and a solid. Surfactants are characteristically organic compounds containing both hydrophobic groups (their tails) and hydrophilic groups (their heads). Therefore, a surfactant molecule contains both a water insoluble (and oil soluble component) and a water soluble component. Biosurfactants encompass the properties of dropping surface tension, stabilizing emulsions, promoting foaming and are usually non-toxic and biodegradable. Interest in microbial surfactants has been progressively escalating in recent years due to their diversity, environmentally friendly nature, possibility of large-scale production, selectivity, performance under intense circumstances and their impending applications in environmental fortification. These molecules have a potential to be used in a variety of industries like cosmetics, pharmaceuticals, humectants, food preservatives and detergents. Presently the production of biosurfactants is highly expensive due to the use of synthetic culture media. Therefore, greater emphasis is being laid on procurement of various cheap agro-industrial substrates including vegetable oils, distillery and dairy wastes, soya molasses, animal fat, waste and starchy waste as raw materials. These wastes can be used as substrates for large-scale production of biosurfactants with advanced technology which is the matter of future research. This review article represents an exhaustive evaluation of the raw materials, with respect to their commercial production, fermentation mechanisms, current developments and future perspectives of a variety of approaches of biosurfactant production.", "title": "" }, { "docid": "bf24ab9d5d78287ce9da9b455b779ed3", "text": "Spatial selective attention and spatial working memory have largely been studied in isolation. Studies of spatial attention have provided clear evidence that observers can bias visual processing towards specific locations, enabling faster and better processing of information at those locations than at unattended locations. We present evidence supporting the view that this process of visual selection is a key component of rehearsal in spatial working memory. 
Thus, although working memory has sometimes been depicted as a storage system that emerges 'downstream' of early sensory processing, current evidence suggests that spatial rehearsal recruits top-down processes that modulate the earliest stages of visual analysis.", "title": "" }, { "docid": "dd0319de90cd0e58a9298a62c2178b25", "text": "The extraction of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. This paper presents a novel hybrid automatic approach for the extraction of retinal image vessels. The method consists in the application of mathematical morphology and a fuzzy clustering algorithm followed by a purification procedure. In mathematical morphology, the retinal image is smoothed and strengthened so that the blood vessels are enhanced and the background information is suppressed. The fuzzy clustering algorithm is then employed to the previous enhanced image for segmentation. After the fuzzy segmentation, a purification procedure is used to reduce the weak edges and noise, and the final results of the blood vessels are consequently achieved. The performance of the proposed method is compared with some existing segmentation methods and hand-labeled segmentations. The approach has been tested on a series of retinal images, and experimental results show that our technique is promising and effective.", "title": "" }, { "docid": "a34825f20b645a146857c1544c08e66e", "text": "1. The midterm will have about 5-6 long questions, and about 8-10 short questions. Space will be provided on the actual midterm for you to write your answers. 2. The midterm is meant to be educational, and as such some questions could be quite challenging. Use your time wisely to answer as much as you can! 3. For additional practice, please see CS 229 extra problem sets available at 1. [13 points] Generalized Linear Models Recall that generalized linear models assume that the response variable y (conditioned on x) is distributed according to a member of the exponential family: p(y; η) = b(y) exp(ηT (y) − a(η)), where η = θ T x. For this problem, we will assume η ∈ R. (a) [10 points] Given a training set {(x (i) , y (i))} m i=1 , the loglikelihood is given by (θ) = m i=1 log p(y (i) | x (i) ; θ). Give a set of conditions on b(y), T (y), and a(η) which ensure that the loglikelihood is a concave function of θ (and thus has a unique maximum). Your conditions must be reasonable, and should be as weak as possible. (E.g., the answer \" any b(y), T (y), and a(η) so that (θ) is concave \" is not reasonable. Similarly, overly narrow conditions, including ones that apply only to specific GLMs, are also not reasonable.) (b) [3 points] When the response variable is distributed according to a Normal distribution (with unit variance), we have b(y) = 1 √ 2π e −y 2 2 , T (y) = y, and a(η) = η 2 2. Verify that the condition(s) you gave in part (a) hold for this setting.", "title": "" }, { "docid": "7d14d06a67a87006ac271c16b1c91b16", "text": "Anti-malware vendors receive daily thousands of potentially malicious binaries to analyse and categorise before deploying the appropriate defence measure. Considering the limitations of existing malware analysis and classification methods, we present MalClassifier, a novel privacy-preserving system for the automatic analysis and classification of malware using network flow sequence mining. 
MalClassifier allows identifying the malware family behind detected malicious network activity without requiring access to the infected host or malicious executable reducing overall response time. MalClassifier abstracts the malware families' network flow sequence order and semantics behaviour as an n-flow. By mining and extracting the distinctive n-flows for each malware family, it automatically generates network flow sequence behaviour profiles. These profiles are used as features to build supervised machine learning classifiers (K-Nearest Neighbour and Random Forest) for malware family classification. We compute the degree of similarity between a flow sequence and the extracted profiles using a novel fuzzy similarity measure that computes the similarity between flows attributes and the similarity between the order of the flow sequences. For classifier performance evaluation, we use network traffic datasets of ransomware and botnets obtaining 96% F-measure for family classification. MalClassifier is resilient to malware evasion through flow sequence manipulation, maintaining the classifier's high accuracy. Our results demonstrate that this type of network flow-level sequence analysis is highly effective in malware family classification, providing insights on reoccurring malware network flow patterns.", "title": "" } ]
scidocsrr
bd95f017591ade84d174f8849e744261
Efficient implementation of sorting on multi-core SIMD CPU architecture
[ { "docid": "a1a81d420ef5702483859b01633bb14c", "text": "Many sorting algorithms have been studied in the past, but there are only a few algorithms that can effectively exploit both SIMD instructions and thread-level parallelism. In this paper, we propose a new parallel sorting algorithm, called aligned-access sort (AA-sort), for shared-memory multi processors. The AA-sort algorithm takes advantage of SIMD instructions. The key to high performance is eliminating unaligned memory accesses that would reduce the effectiveness of SIMD instructions. We implemented and evaluated the AA-sort on PowerPCreg 970MP and Cell Broadband Enginetrade. In summary, a sequential version of the AA-sort using SIMD instructions outperformed IBM's optimized sequential sorting library by 1.8 times and GPUTeraSort using SIMD instructions by 3.3 times on PowerPC 970MP when sorting 32 M of random 32-bit integers. Furthermore, a parallel version of AA-sort demonstrated better scalability with increasing numbers of cores than a parallel version of GPUTeraSort on both platforms.", "title": "" } ]
[ { "docid": "bf89c380e3ce667f4be2e12685f3d583", "text": "Prosocial behaviors are an aspect of adolescents’ positive development that has gained greater attention in the developmental literature since the 1990s. In this article, the authors review the literature pertaining to prosocial behaviors during adolescence. The authors begin by defining prosocial behaviors as prior theory and empirical studies have done. They describe antecedents to adolescents’ prosocial behaviors with a focus on two primary factors: socialization and cultural orientations. Accordingly, the authors review prior literature on prosocial behaviors among different ethnic/cultural groups throughout this article. As limited studies have examined prosocial behaviors among some specific ethnic groups, the authors conclude with recommendations for future research. Adolescence is a period of human development marked by several biological, cognitive, and social transitions. Physical changes, such as the onset of puberty and rapid changes in body composition (e.g., height, weight, and sex characteristics) prompt adolescents to engage in greater self-exploration (McCabe and Ricciardelli, 2003). Enhanced cognitive abilities permit adolescents to engage in more symbolic thinking and to contemplate abstract concepts, such as the self and one’s relationship to others (Kuhn, 2009; Steinberg, 2005). Furthermore, adolescence is marked with increased responsibilities at home and in the school context, opportunities for caregiving within the family, and mutuality in peer relationships (American Psychological Association, 2008). Moreover, society demands a greater level of psychosocial maturity and expects greater adherence to social norms from adolescents compared to children (Eccles et al., 2008). Therefore, adolescence presents itself as a time of major life transitions. In light of these myriad transitions, adolescents are further developing prosocial behaviors. Although the emergence of prosocial behaviors (e.g., expressed behaviors that are intended to benefit others) begins in early childhood, the developmental transitions described above allow adolescents to become active agents in their own developmental process. Behavior that is motivated by adolescents’ concern for others is thought to reflect optimal social functioning or prosocial behaviors (American Psychological Association, 2008). While the early literature focused primarily on prosocial behaviors among young children (e.g., Garner, 2006; Garner et al., 2008; Iannotti, 1985) there are several reasons to track prosocial development into adolescence. First and foremost, individuals develop cognitive abilities that allow them to better phenomenologically process and psychologically mediate life experiences that may facilitate (e.g., completing household chores and caring for siblings) or hinder (e.g., interpersonal conflict and perceptions of institutional discrimination) prosocial development (e.g., Brown and Bigler, 2005). Adolescents express more intentionality in which activities they will engage in and become selective in where they choose to devote their energies (Mahoney et al., 2009). Finally, adolescents are afforded more opportunities to express helping behaviors in other social spheres beyond the family context, such as in schools, communities, and civic society (Yates and Youniss, 1996). 
Origins and Definitions of Prosocial Behaviors Since the turn of the twenty-first century, there has been growing interest in understanding the relationships that exist between the strengths of individuals and resources within communities (e.g., person 4 context) in order to identify pathways for healthy development, or to understand how adolescents’ thriving can be promoted. This line of thinking is commonly described as the positive youth development perspective (e.g., Lerner et al., 2009). Although the adolescent literature still predominantly focuses on problematic development (e.g., delinquency and risk-taking behaviors), studies on adolescents’ prosocial development have increased substantially since the 1990s (Eisenberg et al., 2009a), paralleling the paradigm shift from a deficit-based model of development to one focusing on positive attributes of youth (e.g., Benson et al., 2006; Lerner, 2005). Generally described as the expression of voluntary behaviors with the intention to benefit others (Carlo, 2006; Eisenberg, 2006; see full review by Eisenberg et al., 2009a), prosocial behavior is one aspect among others of positive adolescent development that is gaining greater attention in the literature. Theory on prosocial development is rooted in the literature on moral development, which includes cognitive aspects of moral reasoning (e.g., how individuals decide between moral dilemmas; Kohlberg, 1978), moral behaviors (e.g., expression of behaviors that benefit society; Eisenberg and Fabes, 1998), and emotions (e.g., empathy; Eisenberg and Fabes, 1990). Empirical studies on adolescents’ prosocial development have found that different types of prosocial behaviors may exist. For example, Carlo and colleagues (e.g., Carlo et al., 2010; Carlo and Randall, 2002) found six types of prosocial tendencies (intentions to help others): compliant, dire, emotional, altruistic, anonymous, and public. Compliant helping refers to an individual’s intent to assist when asked. Emotional helping refers to helping in emotionally evocative situations (e.g., witnessing another individual crying). Dire helping refers to International Encyclopedia of the Social & Behavioral Sciences, 2nd edition, Volume 19 http://dx.doi.org/10.1016/B978-0-08-097086-8.23190-5 221 International Encyclopedia of the Social & Behavioral Sciences, Second Edition, 2015, 221–227 Author's personal copy", "title": "" }, { "docid": "541fb071299f20a242d482bc4b1f94ab", "text": "This paper describes some of the early developments in the synthetic aperture technique for radar application. The basic principle and later extensions to the theory are described. The results of the first experimental verification at the University of Illinois are given as well as the results of subsequent experiments. The paper also includes a section comparing some of the important features of real and synthetic aperture systems.", "title": "" }, { "docid": "8780b620d228498447c4f1a939fa5486", "text": "A new mechanism is proposed for securing a blockchain applied to contracts management such as digital rights management. This mechanism includes a new consensus method using a credibility score and creates a hybrid blockchain by alternately using this new method and proof-of-stake. This makes it possible to prevent an attacker from monopolizing resources and to keep securing blockchains.", "title": "" }, { "docid": "77320edf2d8da853b873c71e26802c6e", "text": "Content Delivery Network (CDN) services largely affect the delivery quality perceived by users. 
While those services were initially offered by independent entities, some large ISP now develop their own CDN activities to control costs and delivery quality. But this new activity is also a new source of revenues for those vertically integrated ISP-CDNs, which can sell those services to content providers. In this paper, we investigate the impact of having an ISP and a vertically-integrated CDN, on the main actors of the ecosystem (users, competing ISPs). Our approach is based on an economic model of revenues and costs, and a multilevel game-theoretic formulation of the interactions among actors. Our model incorporates the possibility for the vertically-integrated ISP to partially offer CDN services to competitors in order to optimize the trade-off between CDN revenue (if fully offered) and competitive advantage on subscriptions at the ISP level (if not offered to competitors). Our results highlight two counterintuitive phenomena: an ISP may prefer an independent CDN over controlling (integrating) a CDN, and from the user point of view vertical integration is preferable to an independent CDN or a no-CDN configuration. Hence, a regulator may want to elicit such CDN-ISP vertical integrations rather than prevent them.", "title": "" }, { "docid": "59a4471695fff7d42f49d94fc9755772", "text": "We introduce a computationally efficient algorithm for multi-object tracking by detection that addresses four main challenges: appearance similarity among targets, missing data due to targets being out of the field of view or occluded behind other objects, crossing trajectories, and camera motion. The proposed method uses motion dynamics as a cue to distinguish targets with similar appearance, minimize target mis-identification and recover missing data. Computational efficiency is achieved by using a Generalized Linear Assignment (GLA) coupled with efficient procedures to recover missing data and estimate the complexity of the underlying dynamics. The proposed approach works with track lets of arbitrary length and does not assume a dynamical model a priori, yet it captures the overall motion dynamics of the targets. Experiments using challenging videos show that this framework can handle complex target motions, non-stationary cameras and long occlusions, on scenarios where appearance cues are not available or poor.", "title": "" }, { "docid": "719654900a770c6d2ce5e8f1067fc29b", "text": "Facial expressions are the facial changes in response to a person’s internal emotional states, intentions, or social communications. Facial expression analysis has been an active research topic for behavioral scientists since the work of Darwin in 1872 [21, 26, 29, 83]. Suwa et al. [90] presented an early attempt to automatically analyze facial expressions by tracking the motion of 20 identified spots on an image sequence in 1978. After that, much progress has been made to build computer systems to help us understand and use this natural form of human communication [5, 7, 8, 17, 23, 32, 43, 45, 57, 64, 77, 92, 95, 106–108, 110]. In this chapter, facial expression analysis refers to computer systems that attempt to automatically analyze and recognize facial motions and facial feature changes from visual information. Sometimes the facial expression analysis has been confused with emotion analysis in the computer vision domain. For emotion analysis, higher level knowledge is required. 
For example, although facial expressions can convey emotion, they can also express intention, cognitive processes, physical effort, or other intraor interpersonal meanings. Interpretation is aided by context, body gesture, voice, individual differences, and cultural factors as well as by facial configuration and timing [11, 79, 80]. Computer facial expression analysis systems need to analyze the facial actions regardless of context, culture, gender, and so on.", "title": "" }, { "docid": "d67ab983c681136864f4a66c5b590080", "text": "scoring in DeepQA C. Wang A. Kalyanpur J. Fan B. K. Boguraev D. C. Gondek Detecting semantic relations in text is an active problem area in natural-language processing and information retrieval. For question answering, there are many advantages of detecting relations in the question text because it allows background relational knowledge to be used to generate potential answers or find additional evidence to score supporting passages. This paper presents two approaches to broad-domain relation extraction and scoring in the DeepQA question-answering framework, i.e., one based on manual pattern specification and the other relying on statistical methods for pattern elicitation, which uses a novel transfer learning technique, i.e., relation topics. These two approaches are complementary; the rule-based approach is more precise and is used by several DeepQA components, but it requires manual effort, which allows for coverage on only a small targeted set of relations (approximately 30). Statistical approaches, on the other hand, automatically learn how to extract semantic relations from the training data and can be applied to detect a large amount of relations (approximately 7,000). Although the precision of the statistical relation detectors is not as high as that of the rule-based approach, their overall impact on the system through passage scoring is statistically significant because of their broad coverage of knowledge.", "title": "" }, { "docid": "62fd503d151b97920bcb493ed495f0be", "text": "Powered by TCPDF (www.tcpdf.org) This material is protected by copyright and other intellectual property rights, and duplication or sale of all or part of any of the repository collections is not permitted, except that material may be duplicated by you for your research use or educational purposes in electronic or print form. You must obtain permission for any other use. Electronic or print copies may not be offered, whether for sale or otherwise to anyone who is not an authorised user. Athukorala, Kumaripaba; Gowacka, Dorota; Jacucci, Giulio; Oulasvirta, Antti; Vreeken, Jilles", "title": "" }, { "docid": "0efe3ccc1c45121c5167d3792a7fcd25", "text": "This paper addresses the motion planning problem while considering Human-Robot Interaction (HRI) constraints. The proposed planner generates collision-free paths that are acceptable and legible to the human. The method extends our previous work on human-aware path planning to cluttered environments. A randomized cost-based exploration method provides an initial path that is relevant with respect to HRI and workspace constraints. The quality of the path is further improved with a local path-optimization method. 
Simulation results on mobile manipulators in the presence of humans demonstrate the overall efficacy of the approach.", "title": "" }, { "docid": "da7cc08e5fd7275d2f4194f83f1e7365", "text": "Recursive neural networks (RNN) and their recently proposed extension recursive long short term memory networks (RLSTM) are models that compute representations for sentences, by recursively combining word embeddings according to an externally provided parse tree. Both models thus, unlike recurrent networks, explicitly make use of the hierarchical structure of a sentence. In this paper, we demonstrate that RNNs nevertheless suffer from the vanishing gradient and long distance dependency problem, and that RLSTMs greatly improve over RNN’s on these problems. We present an artificial learning task that allows us to quantify the severity of these problems for both models. We further show that a ratio of gradients (at the root node and a focal leaf node) is highly indicative of the success of backpropagation at optimizing the relevant weights low in the tree. This paper thus provides an explanation for existing, superior results of RLSTMs on tasks such as sentiment analysis, and suggests that the benefits of including hierarchical structure and of including LSTM-style gating are complementary.", "title": "" }, { "docid": "ab2e9a230c9aeec350dff6e3d239c7d8", "text": "Expression and pose variations are major challenges for reliable face recognition (FR) in 2D. In this paper, we aim to endow state of the art face recognition SDKs with robustness to facial expression variations and pose changes by using an extended 3D Morphable Model (3DMM) which isolates identity variations from those due to facial expressions. Specifically, given a probe with expression, a novel view of the face is generated where the pose is rectified and the expression neutralized. We present two methods of expression neutralization. The first one uses prior knowledge to infer the neutral expression image from an input image. The second method, specifically designed for verification, is based on the transfer of the gallery face expression to the probe. Experiments using rectified and neutralized view with a standard commercial FR SDK on two 2D face databases, namely Multi-PIE and AR, show significant performance improvement of the commercial SDK to deal with expression and pose variations and demonstrates the effectiveness of the proposed approach.", "title": "" }, { "docid": "553de71fcc3e4e6660015632eee751b1", "text": "Data governance is an emerging research area getting attention from information systems (IS) scholars and practitioners. In this paper I take a look at existing literature and current state-of-the-art in data governance. I found out that there is only a limited amount of existing scientific literature, but many practitioners are already treating data as a valuable corporate asset. The paper describes an action design research project that will be conducted in 2012-2016 and is expected to result in a generic data governance framework.", "title": "" }, { "docid": "f672df401b24571f81648066b3181890", "text": "We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations. 
A sufficiently powerful temporal model should separate predictable elements of the sequence from unpredictable elements, express uncertainty about those unpredictable elements, and rapidly identify novel elements that may help to predict the future. To create such models, we introduce Generative Temporal Models augmented with external memory systems. They are developed within the variational inference framework, which provides both a practical training methodology and methods to gain insight into the models’ operation. We show, on a range of problems with sparse, long-term temporal dependencies, that these models store information from early in a sequence, and reuse this stored information efficiently. This allows them to perform substantially better than existing models based on well-known recurrent neural networks, like LSTMs.", "title": "" }, { "docid": "b02c718acfab40a33840eec013a09bda", "text": "Smartphones today are ubiquitous source of sensitive information. Information leakage instances on the smartphones are on the rise because of exponential growth in smartphone market. Android is the most widely used operating system on smartphones. Many information flow tracking and information leakage detection techniques are developed on Android operating system. Taint analysis is commonly used data flow analysis technique which tracks the flow of sensitive information and its leakage. This paper provides an overview of existing Information flow tracking techniques based on the Taint analysis for android applications. It is observed that static analysis techniques look at the complete program code and all possible paths of execution before its run, whereas dynamic analysis looks at the instructions executed in the program-run in the real time. We provide in depth analysis of both static and dynamic taint analysis approaches.", "title": "" }, { "docid": "ba57149e82718bad622df36852906531", "text": "The classical psychedelic drugs, including psilocybin, lysergic acid diethylamide and mescaline, were used extensively in psychiatry before they were placed in Schedule I of the UN Convention on Drugs in 1967. Experimentation and clinical trials undertaken prior to legal sanction suggest that they are not helpful for those with established psychotic disorders and should be avoided in those liable to develop them. However, those with so-called 'psychoneurotic' disorders sometimes benefited considerably from their tendency to 'loosen' otherwise fixed, maladaptive patterns of cognition and behaviour, particularly when given in a supportive, therapeutic setting. Pre-prohibition studies in this area were sub-optimal, although a recent systematic review in unipolar mood disorder and a meta-analysis in alcoholism have both suggested efficacy. The incidence of serious adverse events appears to be low. Since 2006, there have been several pilot trials and randomised controlled trials using psychedelics (mostly psilocybin) in various non-psychotic psychiatric disorders. These have provided encouraging results that provide initial evidence of safety and efficacy, however the regulatory and legal hurdles to licensing psychedelics as medicines are formidable. This paper summarises clinical trials using psychedelics pre and post prohibition, discusses the methodological challenges of performing good quality trials in this area and considers a strategic approach to the legal and regulatory barriers to licensing psychedelics as a treatment in mainstream psychiatry. 
This article is part of the Special Issue entitled 'Psychedelics: New Doors, Altered Perceptions'.", "title": "" }, { "docid": "d2edbca2ed1e4952794d97f6e34e02e4", "text": "In today’s world, almost everybody is affluent with computers and network based technology is growing by leaps and bounds. So, network security has become very important, rather an inevitable part of computer system. An Intrusion Detection System (IDS) is designed to detect system attacks and classify system activities into normal and abnormal form. Machine learning techniques have been applied to intrusion detection systems which have an important role in detecting Intrusions. This paper reviews different machine approaches for Intrusion detection system. This paper also presents the system design of an Intrusion detection system to reduce false alarm rate and improve accuracy to detect intrusion.", "title": "" }, { "docid": "7af1ddcefae86ffa989ddd106f032002", "text": "In this paper, we study counterfactual fairness in text classification, which asks the question: How would the prediction change if the sensitive attribute referenced in the example were different? Toxicity classifiers demonstrate a counterfactual fairness issue by predicting that “Some people are gay” is toxic while “Some people are straight” is nontoxic. We offer a metric, counterfactual token fairness (CTF), for measuring this particular form of fairness in text classifiers, and describe its relationship with group fairness. Further, we offer three approaches, blindness, counterfactual augmentation, and counterfactual logit pairing (CLP), for optimizing counterfactual token fairness during training, bridging the robustness and fairness literature. Empirically, we find that blindness and CLP address counterfactual token fairness. The methods do not harm classifier performance, and have varying tradeoffs with group fairness. These approaches, both for measurement and optimization, provide a new path forward for addressing fairness concerns in text classification.", "title": "" }, { "docid": "4adcbd3cdb868406a7e191063ac91573", "text": "In recent years, the increasing diffusion of malicious software has encouraged the adoption of advanced machine learning algorithms to timely detect new threats. A cloud-based approach allows to exploit the big data produced by client agents to train such algorithms, but on the other hand, poses severe challenges on their scalability and performance. We propose a hybrid cloud-based malware detection system in which static and dynamic analyses are combined in order to find a good trade-off between response time and detection accuracy. Our system performs a continuous learning process of its models, based on deep networks, by exploiting the growing amount of data provided by clients. The preliminary experimental evaluation confirms the suitability of the approach proposed here.", "title": "" }, { "docid": "1f53e890c8a1b9c9a8ae450ecde0de8a", "text": "BACKGROUND AND OBJECTIVE\nThe identification and quantification of potential drug-drug interactions is important for avoiding or minimizing the interaction-induced adverse events associated with specific drug combinations. 
Clinical studies in healthy subjects were performed to evaluate potential pharmacokinetic interactions between vortioxetine (Lu AA21004) and co-administered agents, including fluconazole (cytochrome P450 [CYP] 2C9, CYP2C19 and CYP3A inhibitor), ketoconazole (CYP3A and P-glycoprotein inhibitor), rifampicin (CYP inducer), bupropion (CYP2D6 inhibitor and CYP2B6 substrate), ethinyl estradiol/levonorgestrel (CYP3A substrates) and omeprazole (CYP2C19 substrate and inhibitor).\n\n\nMETHODS\nThe ratio of central values of the test treatment to the reference treatment for relevant parameters (e.g., area under the plasma concentration-time curve [AUC] and maximum plasma concentration [C max]) was used to assess pharmacokinetic interactions.\n\n\nRESULTS\nCo-administration of vortioxetine had no effect on the AUC or C max of ethinyl estradiol/levonorgestrel or 5'-hydroxyomeprazole, or the AUC of bupropion; the 90 % confidence intervals for these ratios of central values were within 80-125 %. Steady-state AUC and C max of vortioxetine increased when co-administered with bupropion (128 and 114 %, respectively), fluconazole (46 and 15 %, respectively) and ketoconazole (30 and 26 %, respectively), and decreased by 72 and 51 %, respectively, when vortioxetine was co-administered with rifampicin. Concomitant therapy was generally well tolerated; most adverse events were mild or moderate in intensity.\n\n\nCONCLUSION\nDosage adjustment may be required when vortioxetine is co-administered with bupropion or rifampicin.", "title": "" }, { "docid": "1a7a66f5d4f2ea918a9267ee24c57586", "text": "Elements associated with total suspended particulate matter (TSP) in Jeddah city were determined. Using high-volume samplers, TSP samples were simultaneously collected over a one-year period from seven sampling sites. Samples were analyzed for Al, Ba, Ca, Cu, Mg, Fe, Mn, Zn, Ti, V, Cr, Co, Ni, As, and Sr. Results revealed great dependence of element contents on spatial and temporal variations. Two sites characterized by busy roads, workshops, heavy population, and heavy trucking have high levels of all measured elements. Concentrations of most elements at the two sites exhibit strong spatial gradients and concentrations of elements at these sites are higher than other locations. The highest concentrations of elements were observed during June-August because of dust storms, significant increase in energy consumption, and active surface winds. Enrichment factors of elements at the high-level sites have values in the range >10~60 while for Cu and Zn the enrichment factors are much higher (~0->700) indicating that greater percentage of TSP composition for these three elements in air comes from anthropogenic activities.", "title": "" } ]
scidocsrr
dd7386b2391436fd2b5bdf780407e22e
Position based fluids
[ { "docid": "1dbb04e806b1fd2a8be99633807d9f4d", "text": "Realistically animated fluids can add substantial realism to interactive applications such as virtual surgery simulators or computer games. In this paper we propose an interactive method based on Smoothed Particle Hydrodynamics (SPH) to simulate fluids with free surfaces. The method is an extension of the SPH-based technique by Desbrun to animate highly deformable bodies. We gear the method towards fluid simulation by deriving the force density fields directly from the Navier-Stokes equation and by adding a term to model surface tension effects. In contrast to Eulerian grid-based approaches, the particle-based approach makes mass conservation equations and convection terms dispensable which reduces the complexity of the simulation. In addition, the particles can directly be used to render the surface of the fluid. We propose methods to track and visualize the free surface using point splatting and marching cubes-based surface reconstruction. Our animation method is fast enough to be used in interactive systems and to allow for user interaction with models consisting of up to 5000 particles.", "title": "" } ]
[ { "docid": "b861ea3b6ea6d29e1c225609db069fd5", "text": "A single probe feeding stacked microstrip antenna is presented to obtain dual-band circularly polarized (CP) characteristics using double layers of truncated square patches. The antenna operates at both the L1 and L2 frequencies of 1575 and 1227 MHz for the global positioning system (GPS). With the optimized design, the measured axial ratio (AR) bandwidths with the centre frequencies of L1 and L2 are both greater than 50 MHz, while the impedance characteristics within AR bandwidth satisfy the requirement of VSWR less than 2. At L1 and L2 frequencies, the AR measured is 0.7 dB and 0.3 dB, respectively.", "title": "" }, { "docid": "1b6dfa953ee044fceb17640cc862a534", "text": "Introduction The rapid pace at which the technological innovations are being introduced in the world poses a potential challenge to the retailer, supplier, and enterprises. In the field of Information Technology (IT) there is a rapid growth in the last 30 years (Want 2006; Landt 2005). One of the most promising technological innovations in IT is radio frequency identification (RFID) (Dutta et al. 2007; Whitaker et al. 2007; Bottani et al. 2009). The RFID technology was evolved in 1945 as an espionage tool invented by Leon Theremin for the Soviet Government (Nikitin et al. 2013, Tedjini et al. 2012). At that time it was mainly used by the military. The progress in microchip design, antenna technology and radio spread spectrum pushed it into various applications like supply chain management, retail, automatic toll collection by tunnel companies, animal tracking, ski lift access, tracking library books, theft prevention, vehicle immobilizer systems, railway rolling stock identification, movement tracking, security, healthcare, printing, textiles and clothing (Weinstein 2005; Liu and Miao 2006; Rao et al. 2005; Wu et al. 2009; Tan 2008). RFID can make the companies more competitive by changing the related processes in supply chain, manufacturing and retailing. Abstract", "title": "" }, { "docid": "bf7878378a3e99d9d7044ba5c4885774", "text": "Literature examining the effects of aerobic exercise training on excess postexercise oxygen consumption (EPOC) is sparse. In this study, 9 male participants (19-32 yr) trained (EX) for 12 wk, and 10 in a control group (CON) maintained normal activity. VO(2max), rectal temperature (T(re)), epinephrine, norepinephrine, free fatty acids (FFA), insulin, glucose, blood lactate (BLA), and EPOC were measured before (PRE) and after (POST) the intervention. EPOC at PRE was measured for 120 min after 30 min of treadmill running at 70% VO(2max). EX completed 2 EPOC trials at POST, i.e., at the same absolute (ABS) and relative (REL) intensity; 1 EPOC test for CON served as both the ABS and REL trial because no significant change in VO(2max) was noted. During the ABS trial, total EPOC decreased significantly (p < .01) from PRE (39.4 ± 3.6 kcal) to POST (31.7 ± 2.2 kcal). T(re), epinephrine, insulin, glucose, and BLA at end-exercise or during recovery were significantly lower and FFA significantly higher after training. Training did not significantly affect EPOC during the REL trial; however, epinephrine was significantly lower, and norepinephrine and FFA, significantly higher, at endexercise after training. Results indicate that EPOC varies as a function of relative rather than absolute metabolic stress and that training improves the efficiency of metabolic regulation during recovery from exercise. 
Mechanisms for the decreased magnitude of EPOC in the ABS trial include decreases in BLA, T(re), and perhaps epinephrine-mediated hepatic glucose production and insulin-mediated glucose uptake.", "title": "" }, { "docid": "3bf954a23ea3e7d5326a7b89635f966a", "text": "The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of problems, including neural network training. This paper presents a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm. This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. Application of the new PSO algorithm on several benchmark optimization problems shows a marked improvement in performance over the traditional PSO.", "title": "" }, { "docid": "ad5005bc593b0fbddfe483732b30fe5e", "text": "Recent multi-agent extensions of Q-Learning require knowledge of other agents’ payoffs and Q-functions, and assume game-theoretic play at all times by all other agents. This paper proposes a fundamentally different approach, dubbed “Hyper-Q” Learning, in which values of mixed strategies rather than base actions are learned, and in which other agents’ strategies are estimated from observed actions via Bayesian inference. Hyper-Q may be effective against many different types of adaptive agents, even if they are persistently dynamic. Against certain broad categories of adaptation, it is argued that Hyper-Q may converge to exact optimal time-varying policies. In tests using Rock-Paper-Scissors, Hyper-Q learns to significantly exploit an Infinitesimal Gradient Ascent (IGA) player, as well as a Policy Hill Climber (PHC) player. Preliminary analysis of Hyper-Q against itself is also presented.", "title": "" }, { "docid": "24f6ccacee504550274f750ddd329b3a", "text": "Book recommendations are of great significance in colleges and universities. Although current recommendation approaches have made significant achievements, these approaches do not consider college students’ similar learning trajectories in the same major. In order to recommend books more accurately, mining the knowledge system is very crucial for college students in the same major. This paper proposes a personalized book recommendation algorithm that is based on the time sequential collaborative filtering recommendation, combined with students’ learning trajectories. In order to recommend books effectively, our algorithm leverages space distance. In this algorithm, we consider two important characteristics: the time sequence information of borrowing books and the circulation times of books. Our experimental results demonstrate that our book recommendation algorithm is in accordance with the college students’ demand for professional learning.", "title": "" }, { "docid": "12f2cd73d0b8bf034f6220f1201f79ee", "text": "Web-based tourism information systems are more and more required to provide besides traditional tourism information about hotel facilities and infrastructure also cultural content comprising material heritage, performing art, folk tradition, handicraft or simply habits of everyday life. These cultural Web applications are required not to offer online brochures only, but rather to provide both, value and service. This paper focuses on two crucial aspects of cultural Web applications comprising quality of content and quality of access. 
As an example for achieving quality of content in terms of comprehensiveness and cross-national nature, the MEDINA portal is presented, allowing one-stop access to cultural information of fourteen Mediterranean countries. In order to provide quality of access, the notion of ubiquity is introduced, allowing to customize Web applications towards different kinds of contexts, thus supporting the cultural tourist with device-independent, time-aware, location-aware, and personalized services.", "title": "" }, { "docid": "2820f1623ab5c17e18c8a237156c2d36", "text": "In a two-tier heterogeneous network (HetNet) where small base stations (SBSs) coexist with macro base stations (MBSs), the SBSs may suffer significant performance degradation due to the inter- and intra-tier interferences. Introducing cognition into the SBSs through the spectrum sensing (e.g., carrier sensing) capability helps them detecting the interference sources and avoiding them via opportunistic access to orthogonal channels. In this paper, we use stochastic geometry to model and analyze the performance of two cases of cognitive SBSs in a multichannel environment, namely, the semi-cognitive case and the full-cognitive case. In the semi-cognitive case, the SBSs are only aware of the interference from the MBSs, hence, only inter-tier interference is minimized. On the other hand, in the full-cognitive case, the SBSs access the spectrum via a contention resolution process, hence, both the intra- and intertier interferences are minimized, but at the expense of reduced spectrum access opportunities. We quantify the performance gain in outage probability obtained by introducing cognition into the small cell tier for both the cases. We will focus on a special type of SBSs called the femto access points (FAPs) and also capture the effect of different admission control policies, namely, the open-access and closed-access policies. We show that a semi-cognitive SBS always outperforms a full-cognitive SBS and that there exists an optimal spectrum sensing threshold for the cognitive SBSs which can be obtained via the analytical framework presented in this paper.", "title": "" }, { "docid": "66aff99642972dbe0280c83e4d702e96", "text": "We develop a workload model based on the observed behavior of parallel computers at the San Diego Supercomputer Center and the Cornell Theory Center. This model gives us insight into the performance of strategies for scheduling moldable jobs on space-sharing parallel computers. We find that Adaptive Static Partitioning (ASP), which has been reported to work well for other workloads, does not perform as well as strategies that adapt better to system load. The best of the strategies we consider is one that explicitly reduces allocations when load is high (a variation of Sevcik's (1989) A+ strategy).", "title": "" }, { "docid": "fcf8649ff7c2972e6ef73f837a3d3f4d", "text": "The kitchen environment is one of the scenarios in the home where users can benefit from Ambient Assisted Living (AAL) applications. Moreover, it is the place where old people suffer from most domestic injuries. This paper presents a novel design, implementation and assessment of a Smart Kitchen which provides Ambient Assisted Living services; a smart environment that increases elderly and disabled people's autonomy in their kitchen-related activities through context and user awareness, appropriate user interaction and artificial intelligence. 
It is based on a modular architecture which integrates a wide variety of home technology (household appliances, sensors, user interfaces, etc.) and associated communication standards and media (power line, radio frequency, infrared and cabled). Its software architecture is based on the Open Services Gateway initiative (OSGi), which allows building a complex system composed of small modules, each one providing the specific functionalities required, and can be easily scaled to meet our needs. The system has been evaluated by a large number of real users (63) and carers (31) in two living labs in Spain and UK. Results show a large potential of system functionalities combined with good usability and physical, sensory and cognitive accessibility.", "title": "" }, { "docid": "8edc51b371d7551f9f7e69149cd4ece0", "text": "Though many previous studies has proved the importance of trust from various perspectives, the researches about online consumer’s trust are fragmented in nature and still it need more attention from academics. Lack of consumers trust in online systems is a critical impediment to the success of e-Commerce. Therefore it is important to explore the critical factors that affect the formation of user’s trust in online environments. The main objective of this paper is to analyze the effects of various antecedents of online trust and to predict the user’s intention to engage in online transaction based on their trust in the Information systems. This study is conducted among Asian online consumers and later the results were compared with those from Non-Asian regions. Another objective of this paper is to integrate De Lone and McLean model of IS Success and Technology Acceptance Model (TAM) for measuring the significance of online trust in e-Commerce adoption. The results of this study show that perceived security, perceived privacy, vendor familiarity, system quality and service quality are the significant antecedents of online trust in a B2C e-Commerce context.", "title": "" }, { "docid": "0f49df994b3bc963d42c960a46137e0d", "text": "Finding the best makeup for a given human face is an art in its own right. Experienced makeup artists train for years to be skilled enough to propose a best-fit makeup for an individual. In this work we propose a system that automates this task. We acquired the appearance of 56 human faces, both without and with professional makeup. To this end, we use a controlled-light setup, which allows to capture detailed facial appearance information, such as diffuse reflectance, normals, subsurface-scattering, specularity, or glossiness. A 3D morphable face model is used to obtain 3D positional information and to register all faces into a common parameterization. We then define makeup to be the change of facial appearance and use the acquired database to find a mapping from the space of human facial appearance to makeup. Our main application is to use this mapping to suggest the best-fit makeup for novel faces that are not in the database. Further applications are makeup transfer, automatic rating of makeup, makeup-training, or makeup-exaggeration. As our makeup representation captures a change in reflectance and scattering, it allows us to synthesize faces with makeup in novel 3D views and novel lighting with high realism. The effectiveness of our approach is further validated in a user-study.", "title": "" }, { "docid": "6d4452696b5b87bd640f7a11283f9963", "text": "This paper describes the development of a chess-playing robot called MarineBlue. 
This robot consists of three components: a computer vision component to recognize chess board situations, a chess engine component to compute new moves and a robot control component to execute these moves by means of a robot arm. In the paper, we focus on the algorithms that have been used to implement the computer vision and robot control components. The MarineBlue robot is fully autonomous, in the sense that it can recognize the moves done by a user, calculate a move in response to the user’s move and control a robot arm to perform this calculated move. The robot that was used to develop MarineBlue is a low-cost, educational robot, which results in a cost-effective and compact chess-playing robot.", "title": "" }, { "docid": "108a3f06052f615a7ebfc561c3c87cfc", "text": "There are an estimated 0.5-1 million mite species on earth. Among the many mites that are known to affect humans and animals, only a subset are parasitic but these can cause significant disease. We aim here to provide an overview of the most recent work in this field in order to identify common biological features of these parasites and to inform common strategies for future research. There is a critical need for diagnostic tools to allow for better surveillance and for drugs tailored specifically to the respective parasites. Multi-'omics' approaches represent a logical and timely strategy to identify the appropriate mite molecules. Recent advances in sequencing technology enable us to generate de novo genome sequence data, even from limited DNA resources. Consequently, the field of mite genomics has recently emerged and will now rapidly expand, which is a particular advantage for parasitic mites that cannot be cultured in vitro. Investigations of the microbiota associated with mites will elucidate the link between parasites and pathogens, and define the role of the mite in transmission and pathogenesis. The databases generated will provide the crucial knowledge essential to design novel diagnostic tools, control measures, prophylaxes, drugs and immunotherapies against the mites and associated secondary infections.", "title": "" }, { "docid": "6625c08d03f755550f2a34086b4ae600", "text": "The general requirement in the automotive radar application is to measure the target range R and radial velocity vr simultaneously and unambiguously with high accuracy and resolution even in multitarget situations, which is a matter of the appropriate waveform design. Based on a single continuous wave chirp transmit signal, target range R and radial velocity vr cannot be measured in an unambiguous way. Therefore a so-called multiple frequency shift keying (MFSK) transmit signal was developed, which is applied to measure target range and radial velocity separately and simultaneously. In this case the radar measurement is based on a frequency and additionally on a phase measurement, which suffers from a lower estimation accuracy compared with a pure frequency measurement. This MFSK waveform can therefore be improved and outperformed by a chirp sequences waveform. Each chirp signal has in this case very short time duration Tchirp. Therefore the measured beat frequency fB is dominated by target range R and is less influenced by the radial velocity vr. The range and radial velocity estimation is based on two separate frequency measurements with high accuracy in both cases. Classical chirp sequence waveforms suffer from possible ambiguities in the velocity measurement. 
It is the objective of this paper to modify the classical chirp sequence to get an unambiguous velocity measurement even in multitarget situations.", "title": "" }, { "docid": "120007860a5fbf6a3bbc9b2fe6074b87", "text": "For the last few decades, optimization has been developing at a fast rate. Bio-inspired optimization algorithms are metaheuristics inspired by nature. These algorithms have been applied to solve different problems in engineering, economics, and other domains. Bio-inspired algorithms have also been applied in different branches of information technology such as networking and software engineering. Time series data mining is a field of information technology that has its share of these applications too. In previous works we showed how bio-inspired algorithms such as the genetic algorithms and differential evolution can be used to find the locations of the breakpoints used in the symbolic aggregate approximation of time series representation, and in another work we showed how we can utilize the particle swarm optimization, one of the famous bio-inspired algorithms, to set weights to the different segments in the symbolic aggregate approximation representation. In this paper we present, in two different approaches, a new meta optimization process that produces optimal locations of the breakpoints in addition to optimal weights of the segments. The experiments of time series classification task that we conducted show an interesting example of how the overfitting phenomenon, a frequently encountered problem in data mining which happens when the model overfits the training set, can interfere in the optimization process and hide the superior performance of an optimization algorithm.", "title": "" }, { "docid": "4c8ac629f8a7faaa315e4e4441eb630c", "text": "This article reviews the cognitive therapy of depression. The psychotherapy based on this theory consists of behavioral and verbal techniques to change cognitions, beliefs, and errors in logic in the patient's thinking. A few of the various techniques are described and a case example is provided. Finally, the outcome studies testing the efficacy of this approach are reviewed.", "title": "" }, { "docid": "65a4ec1b13d740ae38f7b896edb2eaff", "text": "The problem of evolutionary network analysis has gained increasing attention in recent years, because of an increasing number of networks, which are encountered in temporal settings. For example, social networks, communication networks, and information networks continuously evolve over time, and it is desirable to learn interesting trends about how the network structure evolves over time, and in terms of other interesting trends. One challenging aspect of networks is that they are inherently resistant to parametric modeling, which allows us to truly express the edges in the network as functions of time. This is because, unlike multidimensional data, the edges in the network reflect interactions among nodes, and it is difficult to independently model the edge as a function of time, without taking into account its correlations and interactions with neighboring edges. Fortunately, we show that it is indeed possible to achieve this goal with the use of a matrix factorization, in which the entries are parameterized by time. This approach allows us to represent the edge structure of the network purely as a function of time, and predict the evolution of the network over time. 
This opens the possibility of using the approach for a wide variety of temporal network analysis problems, such as predicting future trends in structures, predicting links, and node-centric anomaly/event detection. This flexibility is because of the general way in which the approach allows us to express the structure of the network as a function of time. We present a number of experimental results on a number of temporal data sets showing the effectiveness of the approach.", "title": "" }, { "docid": "71296a25cda3991333cd78fba7a85fa7", "text": "In the last few years, researchers in the field of High Dynamic Range (HDR) Imaging have focused on providing tools for expanding Low Dynamic Range (LDR) content for the generation of HDR images due to the growing popularity of HDR in applications, such as photography and rendering via Image-Based Lighting, and the imminent arrival of HDR displays to the consumer market. LDR content expansion is required due to the lack of fast and reliable consumer level HDR capture for still images and videos. Furthermore, LDR content expansion, will allow the re-use of legacy LDR stills, videos and LDR applications created, over the last century and more, to be widely available. The use of certain LDR expansion methods, those that are based on the inversion of tone mapping operators, has made it possible to create novel compression algorithms that tackle the problem of the size of HDR content storage, which remains one of the major obstacles to be overcome for the adoption of HDR. These methods are used in conjunction with traditional LDR compression methods and can evolve accordingly. The goal of this report is to provide a comprehensive overview on HDR Imaging, and an in depth review on these emerging topics.", "title": "" }, { "docid": "dd4a95a6ffdb1a1c5c242b7a5d969d29", "text": "A microstrip antenna with frequency agility and polarization diversity is presented. Commercially available packaged RF microelectrical-mechanical (MEMS) single-pole double-throw (SPDT) devices are used with a novel feed network to provide four states of polarization control; linear-vertical, linear-horizontal, left-hand circular and right-handed circular. Also, hyper-abrupt silicon junction tuning diodes are used to tune the antenna center frequency from 0.9-1.5 GHz. The microstrip antenna is 1 in x 1 in, and is fabricated on a 4 in x 4 in commercial-grade dielectric laminate. To the authors' knowledge, this is the first demonstration of an antenna element with four polarization states across a tunable bandwidth of 1.4:1.", "title": "" } ]
scidocsrr
c89edb9e500b001b24b13c0137cd12f5
Sentence Level Recurrent Topic Model: Letting Topics Speak for Themselves
[ { "docid": "3d12dea4ae76c5af54578262996fe0bb", "text": "We introduce a two-layer undirected graphical model, calle d a “Replicated Softmax”, that can be used to model and automatically extract low -dimensional latent semantic representations from a large unstructured collec ti n of documents. We present efficient learning and inference algorithms for thi s model, and show how a Monte-Carlo based method, Annealed Importance Sampling, c an be used to produce an accurate estimate of the log-probability the model a ssigns to test data. This allows us to demonstrate that the proposed model is able to g neralize much better compared to Latent Dirichlet Allocation in terms of b th the log-probability of held-out documents and the retrieval accuracy.", "title": "" }, { "docid": "d0f71092df2eab53e7f32eff1cb7af2e", "text": "Topic modeling of textual corpora is an important and challenging problem. In most previous work, the “bag-of-words” assumption is usually made which ignores the ordering of words. This assumption simplifies the computation, but it unrealistically loses the ordering information and the semantic of words in the context. In this paper, we present a Gaussian Mixture Neural Topic Model (GMNTM) which incorporates both the ordering of words and the semantic meaning of sentences into topic modeling. Specifically, we represent each topic as a cluster of multi-dimensional vectors and embed the corpus into a collection of vectors generated by the Gaussian mixture model. Each word is affected not only by its topic, but also by the embedding vector of its surrounding words and the context. The Gaussian mixture components and the topic of documents, sentences and words can be learnt jointly. Extensive experiments show that our model can learn better topics and more accurate word distributions for each topic. Quantitatively, comparing to state-of-the-art topic modeling approaches, GMNTM obtains significantly better performance in terms of perplexity, retrieval accuracy and classification accuracy.", "title": "" }, { "docid": "120e36cc162f4ce602da810c80c18c7d", "text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.", "title": "" } ]
[ { "docid": "79fc27c21305f5ff2db35f3529db8909", "text": "Bundling techniques provide a visual simplification of a graph drawing or trail set, by spatially grouping similar graph edges or trails. This way, the structure of the visualization becomes simpler and thereby easier to comprehend in terms of assessing relations that are encoded by such paths, such as finding groups of strongly interrelated nodes in a graph, finding connections between spatial regions on a map linked by a number of vehicle trails, or discerning the motion structure of a set of objects by analyzing their paths. In this state of the art report, we aim to improve the understanding of graph and trail bundling via the following main contributions. First, we propose a data-based taxonomy that organizes bundling methods on the type of data they work on (graphs vs trails, which we refer to as paths). Based on a formal definition of path bundling, we propose a generic framework that describes the typical steps of all bundling algorithms in terms of high-level operations and show how existing method classes implement these steps. Next, we propose a description of tasks that bundling aims to address. Finally, we provide a wide set of example applications of bundling techniques and relate these to the above-mentioned taxonomies. Through these contributions, we aim to help both researchers and users to understand the bundling landscape as well as its technicalities.", "title": "" }, { "docid": "9ec7b122117acf691f3bee6105deeb81", "text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.", "title": "" }, { "docid": "65b843c30f69d33fa0c9aedd742e3434", "text": "The computational study of complex systems increasingly requires model integration. The drivers include a growing interest in leveraging accepted legacy models, an intensifying pressure to reduce development costs by reusing models, and expanding user requirements that are best met by combining different modeling methods. There have been many published successes including supporting theory, conceptual frameworks, software tools, and case studies. Nonetheless, on an empirical basis, the published work suggests that correctly specifying model integration strategies remains challenging. 
This naturally raises a question that has not yet been answered in the literature, namely 'what is the computational difficulty of model integration?' This paper's contribution is to address this question with a time and space complexity analysis that concludes that deep model integration with proven correctness is both NP-complete and PSPACE-complete and that reducing this complexity requires sacrificing correctness proofs in favor of guidance from both subject matter experts and modeling specialists.", "title": "" }, { "docid": "9b07a147a3492d53a6a996697f66a342", "text": "We present a method for real-time 3D object instance detection that does not require a time-consuming training stage, and can handle untextured objects. At its core, our approach is a novel image representation for template matching designed to be robust to small image transformations. This robustness is based on spread image gradient orientations and allows us to test only a small subset of all possible pixel locations when parsing the image, and to represent a 3D object with a limited set of templates. In addition, we demonstrate that if a dense depth sensor is available we can extend our approach for an even better performance also taking 3D surface normal orientations into account. We show how to take advantage of the architecture of modern computers to build an efficient but very discriminant representation of the input images that can be used to consider thousands of templates in real time. We demonstrate in many experiments on real data that our method is much faster and more robust with respect to background clutter than current state-of-the-art methods.", "title": "" }, { "docid": "71b5c8679979cccfe9cad229d4b7a952", "text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "title": "" }, { "docid": "65aed4d07ba558da05d3458884d8b67b", "text": "This paper proposes an input voltage sensorless control algorithm for three-phase active boost rectifiers. Using this approach, the input ac-phase voltages can be accurately estimated from the fluctuations of other measured state variables and preceding switching state information from converter dynamics. 
Furthermore, the proposed control strategy reduces the input current harmonics of an ac–dc three-phase boost power factor correction (PFC) converter by injecting an additional common-mode duty ratio term to the feedback controllers’ outputs. This additional duty compensation term cancels the unwanted input harmonics, caused by the floating potential between ac source neutral and dc link negative, without requiring any access to the neutral point. A 6-kW (continuous power)/10-kW (peak power) three-phase boost PFC prototype using SiC-based semiconductor switching devices is designed and developed to validate the proposed control algorithm. The experimental results show that an input power factor of 0.999 with a conversion efficiency of 98.3%, total harmonic distortion as low as 4%, and a tightly regulated dc-link voltage with 1% ripple can be achieved.", "title": "" }, { "docid": "b67a060ee362425bee06e7638763be7e", "text": "The SLC19 gene family of solute carriers is a family of three transporter proteins with significant structural similarity, transporting, however, substrates with different structure and ionic charge. The three members of this gene family are expressed ubiquitously and mediate the transport of two important water-soluble vitamins, folate and thiamine. The concentrative transport of substrates mediated by the members of this gene family is energized by transcellular H+/OH− gradient. SLC19A1 is expressed at highest levels in absorptive cells where it is located in a polarized manner either in the apical or basal membrane, depending on the cell type. It mediates the transport of reduced folate and its analogs, such as methotrexate, which are anionic at physiological pH. SLC19A2 is expressed ubiquitously and mediates the transport of thiamine, a cation at physiological pH. SLC19A3 is also widely expressed and is capable of transporting thiamine. This review summarizes the current knowledge on the structural, functional, molecular and physiological aspects of the SLC19 gene family.", "title": "" }, { "docid": "a2258145e9366bfbf515b3949b2d70fa", "text": "Affect intensity (AI) may reconcile 2 seemingly paradoxical findings: Women report more negative affect than men but equal happiness as men. AI describes people's varying response intensity to identical emotional stimuli. A college sample of 66 women and 34 men was assessed on both positive and negative affect using 4 measurement methods: self-report, peer report, daily report, and memory performance. A principal-components analysis revealed an affect balance component and an AI component. Multimeasure affect balance and AI scores were created, and t tests were computed that showed women to be as happy as and more intense than men. Gender accounted for less than 1% of the variance in happiness but over 13% in AI. Thus, depression findings of more negative affect in women do not conflict with well-being findings of equal happiness across gender. Generally, women's more intense positive emotions balance their higher negative affect.", "title": "" }, { "docid": "09e8e50db9ca9af79005013b73bbb250", "text": "The number of tools for dynamics simulation has grown in the last years. It is necessary for the robotics community to have elements to ponder which of the available tools is the best for their research. As a complement to an objective and quantitative comparison, difficult to obtain since not all the tools are open-source, an element of evaluation is user feedback. 
With this goal in mind, we created an online survey about the use of dynamical simulation in robotics. This paper reports the analysis of the participants’ answers and a descriptive information fiche for the most relevant tools. We believe this report will be helpful for roboticists to choose the best simulation tool for their researches.", "title": "" }, { "docid": "bfabd63b2b5b3c0b58bb2ed687994b31", "text": "Magnetotellurics is a geophysics technique for characterisation of geothermal reservoirs, mineral exploration, and other geoscience endeavours that need to sound deeply into the earth -- many kilometres or tens of kilometres. Central to its data processing is an inversion problem which currently takes several weeks on a desktop machine. In our new eScience lab, enabled by cloud computing, we parallelised an existing FORTAN program and embedded the parallel version in a cloud-based web application to improve its usability. A factor-of-five speedup has taken the time for some inversions from weeks down to days and is in use in a pre-fracturing and post-fracturing study of a new geothermal site in South Australia, an area with a high occurrence of hot dry rocks. We report on our experience with Amazon Web Services cloud services and our migration to Microsoft Azure, the collaboration between computer scientists and geophysicists, and the foundation it has laid for future work exploiting cloud data-parallel programming models.", "title": "" }, { "docid": "81b3562907a19a12f02b82f927d89dc7", "text": "Warehouse automation systems that use robots to save human labor are becoming increasingly common. In a previous study, a picking system using a multi-joint type robot was developed. However, articulated robots are not ideal in warehouse scenarios, since inter-shelf space can limit their freedom of motion. Although the use of linear motion-type robots has been suggested as a solution, their drawback is that an additional cable carrier is needed. The authors therefore propose a new configuration for a robot manipulator that uses wireless power transmission (WPT), which delivers power without physical contact except at the base of the robot arm. We describe here a WPT circuit design suitable for rotating and sliding-arm mechanisms. Overall energy efficiency was confirmed to be 92.0%.", "title": "" }, { "docid": "07310c30b78d74a1e237af4dd949d68e", "text": "The vulnerability of face, fingerprint and iris recognition systems to attacks based on morphed biometric samples has been established in the recent past. However, so far a reliable detection of morphed biometric samples has remained an unsolved research challenge. In this work, we propose the first multi-algorithm fusion approach to detect morphed facial images. The FRGCv2 face database is used to create a set of 4,808 morphed and 2,210 bona fide face images which are divided into a training and test set. From a single cropped facial image features are extracted using four types of complementary feature extraction algorithms, including texture descriptors, keypoint extractors, gradient estimators and a deep learning-based method. By performing a score-level fusion of comparison scores obtained by four different types of feature extractors, a detection equal error rate (D-EER) of 2.8% is achieved. 
Compared to the best single algorithm approach achieving a D-EER of 5.5%, the D-EER of the proposed multi-algorithm fusion system is al- most twice as low, confirming the soundness of the presented approach.", "title": "" }, { "docid": "18a483a6f8ce4f20a6e5209ca6dd4808", "text": "OBJECTIVE\nCurrent mainstream EEG electrode setups permit efficient recordings, but are often bulky and uncomfortable for subjects. Here we introduce a novel type of EEG electrode, which is designed for an optimal wearing comfort. The electrode is referred to as C-electrode where \"C\" stands for comfort.\n\n\nMETHODS\nThe C-electrode does not require any holder/cap for fixation on the head nor does it use traditional pads/lining of disposable electrodes - thus, it does not disturb subjects. Fixation of the C-electrode on the scalp is based entirely on the adhesive interaction between the very light C-electrode/wire construction (<35 mg) and a droplet of EEG paste/gel. Moreover, because of its miniaturization, both C-electrode (diameter 2-3mm) and a wire (diameter approximately 50 microm) are minimally (or not at all) visible to an external observer. EEG recordings with standard and C-electrodes were performed during rest condition, self-paced movements and median nerve stimulation.\n\n\nRESULTS\nThe quality of EEG recordings for all three types of experimental conditions was similar for standard and C-electrodes, i.e., for near-DC recordings (Bereitschaftspotential), standard rest EEG spectra (1-45 Hz) and very fast oscillations approximately 600 Hz (somatosensory evoked potentials). The tests showed also that once being placed on a subject's head, C-electrodes can be used for 9h without any loss in EEG recording quality. Furthermore, we showed that C-electrodes can be effectively utilized for Brain-Computer Interfacing. C-electrodes proved to posses a high stability of mechanical fixation (stayed attached with 2.5 g accelerations). Subjects also reported not having any tactile sensations associated with wearing of C-electrodes.\n\n\nCONCLUSION\nC-electrodes provide optimal wearing comfort without any loss in the quality of EEG recordings.\n\n\nSIGNIFICANCE\nWe anticipate that C-electrodes can be used in a wide range of clinical, research and emerging neuro-technological environments.", "title": "" }, { "docid": "fba577baeb7fea4ce2fd4e768982e642", "text": "Teravoxel volume electron microscopy data sets from neural tissue can now be acquired in weeks, but data analysis requires years of manual labor. We developed the SyConn framework, which uses deep convolutional neural networks and random forest classifiers to infer a richly annotated synaptic connectivity matrix from manual neurite skeleton reconstructions by automatically identifying mitochondria, synapses and their types, axons, dendrites, spines, myelin, somata and cell types. We tested our approach on serial block-face electron microscopy data sets from zebrafish, mouse and zebra finch, and computed the synaptic wiring of songbird basal ganglia. We found that, for example, basal-ganglia cell types with high firing rates in vivo had higher densities of mitochondria and vesicles and that synapse sizes and quantities scaled systematically, depending on the innervated postsynaptic cell types.", "title": "" }, { "docid": "acdcdae606f9c046aab912075d4ec609", "text": "Community sensing, fusing information from populations of privately-held sensors, presents a great opportunity to create efficient and cost-effective sensing applications. 
Yet, reasonable privacy concerns often limit the access to such data streams. How should systems valuate and negotiate access to private information, for example in return for monetary incentives? How should they optimally choose the participants from a large population of strategic users with privacy concerns, and compensate them for information shared? In this paper, we address these questions and present a novel mechanism, SEQTGREEDY, for budgeted recruitment of participants in community sensing. We first show that privacy tradeoffs in community sensing can be cast as an adaptive submodular optimization problem. We then design a budget feasible, incentive compatible (truthful) mechanism for adaptive submodular maximization, which achieves near-optimal utility for a large class of sensing applications. This mechanism is general, and of independent interest. We demonstrate the effectiveness of our approach in a case study of air quality monitoring, using data collected from the Mechanical Turk platform. Compared to the state of the art, our approach achieves up to 30% reduction in cost in order to achieve a desired level of utility.", "title": "" }, { "docid": "a37498a6fbaabd220bad848d440e889b", "text": "Deep multitask learning boosts performance by sharing learned structure across related tasks. This paper adapts ideas from deep multitask learning to the setting where only a single task is available. The method is formalized as pseudo-task augmentation, in which models are trained with multiple decoders for each task. Pseudo-tasks simulate the effect of training towards closelyrelated tasks drawn from the same universe. In a suite of experiments, pseudo-task augmentation improves performance on single-task learning problems. When combined with multitask learning, further improvements are achieved, including state-of-the-art performance on the CelebA dataset, showing that pseudo-task augmentation and multitask learning have complementary value. All in all, pseudo-task augmentation is a broadly applicable and efficient way to boost performance in deep learning systems.", "title": "" }, { "docid": "f1975699674d03e54e9610442d30f060", "text": "Bodybuilding competitions are becoming increasingly popular. Competitors are judged on their aesthetic appearance and usually exhibit a high level of muscularity and symmetry and low levels of body fat. Commonly used techniques to improve physique during the preparation phase before competitions include dehydration, periods of prolonged fasting, severe caloric restriction, excessive cardiovascular exercise and inappropriate use of diuretics and anabolic steroids. In contrast, this case study documents a structured nutrition and conditioning intervention followed by a 21 year-old amateur bodybuilding competitor to improve body composition, resting and exercise fat oxidation, and muscular strength that does not involve use of any of the above mentioned methods. Over a 14-week period, the Athlete was provided with a scientifically designed nutrition and conditioning plan that encouraged him to (i) consume a variety of foods; (ii) not neglect any macronutrient groups; (iii) exercise regularly but not excessively and; (iv) incorporate rest days into his conditioning regime. This strategy resulted in a body mass loss of 11.7 kg's, corresponding to a 6.7 kg reduction in fat mass and a 5.0 kg reduction in fat-free mass. Resting metabolic rate decreased from 1993 kcal/d to 1814 kcal/d, whereas resting fat oxidation increased from 0.04 g/min to 0.06 g/min. 
His capacity to oxidize fat during exercise increased more than two-fold from 0.24 g/min to 0.59 g/min, while there was a near 3-fold increase in the corresponding exercise intensity that elicited the maximal rate of fat oxidation; 21% V̇O2max to 60% V̇O2max. Hamstring concentric peak torque decreased (1.7 to 1.5 Nm/kg), whereas hamstring eccentric (2.0 Nm/kg to 2.9 Nm/kg), quadriceps concentric (3.4 Nm/kg to 3.7 Nm/kg) and quadriceps eccentric (4.9 Nm/kg to 5.7 Nm/kg) peak torque all increased. Psychological mood-state (BRUMS scale) was not negatively influenced by the intervention and all values relating to the Athlete's mood-state remained below average over the course of study. This intervention shows that a structured and scientifically supported nutrition strategy can be implemented to improve parameters relevant to bodybuilding competition and importantly the health of competitors, therefore questioning the conventional practices of bodybuilding preparation.", "title": "" }, { "docid": "0c487b9609add0666915411b8b56ba61", "text": "In order to understand the reasons that lead individuals to practice physical activity, researchers developed the Motives for Physical Activity Measure-Revised (MPAM-R) scale. In 2010, a translation of MPAM-R to Portuguese and its validation was performed. However, psychometric measures were not acceptable. In addition, factor scores in some sports psychology scales are calculated by the mean of scores by items of the factor. Nevertheless, it seems appropriate that items with higher factor loadings, extracted by Factor Analysis, have greater weight in the factor score, as items with lower factor loadings have less weight in the factor score. The aims of the present study are to translate, validate the MPAM-R for Portuguese versions, and investigate agreement between two methods used to calculate factor scores. Three hundred volunteers who were involved in physical activity programs for at least 6 months were collected. Confirmatory Factor Analysis of the 30 items indicated that the version did not fit the model. After excluding four items, the final model with 26 items showed acceptable model fit measures by Exploratory Factor Analysis, as well as it conceptually supports the five factors as the original proposal. When two methods are compared to calculate factors scores, our results showed that only \"Enjoyment\" and \"Appearance\" factors showed agreement between methods to calculate factor scores. So, the Portuguese version of the MPAM-R can be used in a Brazilian context, and a new proposal for the calculation of the factor score seems to be promising.", "title": "" }, { "docid": "bd0e9da77d26116c629c7c8c259013f9", "text": "In the appstore-centric ecosystem, app developers have an urgent requirement to optimize their release strategy to maximize user adoption of their apps. To address this problem, we introduce an approach to assisting developers to select the proper release opportunity based on the purpose of the update and current condition of the app. Before that, we propose the update interval to characterize release patterns of apps, and find significance of the updates through empirical analysis. We mined the release-history data of 17,820 apps from 33 categories in Google Play, over a period of 105 days. With 41,028 releases identified from these apps, we reveal important characteristics of update intervals and how these factors can influence update effects. 
We suggest developers to synthetically consider app ranking, rating trend, and update purpose in addition to the timing of releasing an app version. We propose a Multinomial Naive Bayes model to help decide an optimal release opportunity to gain better user adoption.", "title": "" }, { "docid": "827c3bdcec80e89bbaee27cfbd6b5e74", "text": "The assessment of current sexual behavior (fantasies, urges, and activities) and sexual preoccupation (measured in min/day) associated with both conventional (i.e., adult relationship-associated) or unconventional (paraphilia and paraphilia-related) sexual behavior were ascertained from a sample of 120 consecutively evaluated males with paraphilias (PA; n = 88, including sex offender paraphiliacs; n = 60) and paraphilia-related disorders (PRD; n = 32). In addition, an assessment of hypersexual desire, defined as the highest sustained period (at least 6 months minimum duration) of persistently enacted sexual behavior (total sexual outlet/week [TSO] after age 15) was assessed. In almost all measures, the PA and PRD groups were not statistically significantly different. The average PA or PRD reported a mean hypersexual TSO of 11.7 +/- 7.3, a mean age of 21.6 +/- 7.1 years at onset of peak hypersexual behavior, and a mean duration of 6.2 +/- 7.6 years of hypersexual TSO. When the sample was stratified into three subgroups on the basis of the lifetime number of PAs + PRDs as a proxy measure of the severity of sexual impulsivity, the \"high\" group, with at least 5 lifetime PAs and PRDs, consisted of all paraphilic males, predominantly sex offenders, who self-reported the highest hypersexual desire (14.3 +/- 7.9), the highest current TSO/week (9.9 +/- 8.1), the most current sexual preoccupation (2-4 hr/day), and the highest likelihood of incarceration secondary to paraphilic sex-offending behavior. Although hypersexual desire, a quantitative measure of enacted sexual behaviours, may be a meaningful construct for clinically derived samples, the incidence and prevalence of hypersexual desire in community samples of males with paraphilias and paraphilia-related disorders is unknown.", "title": "" } ]
scidocsrr
2cc337dd5ddbf1d672bcf882343ded07
Ratings for emotion film clips.
[ { "docid": "93d8b8afe93d10e54bf4a27ba3b58220", "text": "Researchers interested in emotion have long struggled with the problem of how to elicit emotional responses in the laboratory. In this article, we summarise five years of work to develop a set of films that reliably elicit each of eight emotional states (amusement, anger, contentment, disgust, fear, neutral, sadness, and surprise). After evaluating over 250 films, we showed selected film clips to an ethnically diverse sample of 494 English-speaking subjects. We then chose the two best films for each of the eight target emotions based on the intensity and discreteness of subjects' responses to each film. We found that our set of 16 films successfully elicited amusement, anger, contentment. disgust, sadness, surprise, a relatively neutral state, and, to a lesser extent, fear. We compare this set of films with another set recently described by Philippot (1993), and indicate that detailed instructions for creating our set of film stimuli will be provided on request.", "title": "" } ]
[ { "docid": "cd36a4e57a446e25ae612cdc31f6293e", "text": "Privacy and security concerns can prevent sharing of data, derailing data mining projects. Distributed knowledge discovery, if done correctly, can alleviate this problem. The key is to obtain valid results, while providing guarantees on the (non)disclosure of data. We present a method for k-means clustering when different sites contain different attributes for a common set of entities. Each site learns the cluster of each entity, but learns nothing about the attributes at other sites.", "title": "" }, { "docid": "a8477be508fab67456c5f6b61d3642b5", "text": "Although three-phase permanent magnet (PM) motors are quite common in industry, multi-phase PM motors are used in special applications where high power and redundancy are required. Multi-phase PM motors offer higher torque/power density than conventional three-phase PM motors. In this paper, a novel multi-phase consequent pole PM (CPPM) synchronous motor is proposed. The constant power–speed range of the proposed motor is quite wide as opposed to conventional PM motors. The design and the detailed finite-element analysis of the proposed nine-phase CPPM motor and performance comparison with a nine-phase surface mounted PM motor are completed to illustrate the benefits of the proposed motor.", "title": "" }, { "docid": "c664918193470b20af2ce2ecf0c8e1c7", "text": "The exceptional electronic properties of graphene, with its charge carriers mimicking relativistic quantum particles and its formidable potential in various applications, have ensured a rapid growth of interest in this new material. We report on electron transport in quantum dot devices carved entirely from graphene. At large sizes (>100 nanometers), they behave as conventional single-electron transistors, exhibiting periodic Coulomb blockade peaks. For quantum dots smaller than 100 nanometers, the peaks become strongly nonperiodic, indicating a major contribution of quantum confinement. Random peak spacing and its statistics are well described by the theory of chaotic neutrino billiards. Short constrictions of only a few nanometers in width remain conductive and reveal a confinement gap of up to 0.5 electron volt, demonstrating the possibility of molecular-scale electronics based on graphene.", "title": "" }, { "docid": "2e6c14ef1fe5c643a19e8c0e759e086b", "text": "Deafblind people have a severe degree of combined visual and auditory impairment resulting in problems with communication, (access to) information and mobility. Moreover, in order to interact with other people, most of them need the constant presence of a caregiver who plays the role of an interpreter with an external world organized for hearing and sighted people. As a result, they usually live behind an invisible wall of silence, in a unique and inexplicable condition of isolation.\n In this paper, we describe DB-HAND, an assistive hardware/software system that supports users to autonomously interact with the environment, to establish social relationships and to gain access to information sources without an assistant. DB-HAND consists of an input/output wearable peripheral (a glove equipped with sensors and actuators) that acts as a natural interface since it enables communication using a language that is easily learned by a deafblind: Malossi method. Interaction with DB-HAND is managed by a software environment, whose purpose is to translate text into sequences of tactile stimuli (and vice-versa), to execute commands and to deliver messages to other users. 
It also provides multi-modal feedback on several standard output devices to support interaction with the hearing and the sighted people.", "title": "" }, { "docid": "114492ca2cef179a39b5ad5edbc80de0", "text": "We review early and recent psychological theories of dehumanization and survey the burgeoning empirical literature, focusing on six fundamental questions. First, we examine how people are dehumanized, exploring the range of ways in which perceptions of lesser humanness have been conceptualized and demonstrated. Second, we review who is dehumanized, examining the social targets that have been shown to be denied humanness and commonalities among them. Third, we investigate who dehumanizes, notably the personality, ideological, and other individual differences that increase the propensity to see others as less than human. Fourth, we explore when people dehumanize, focusing on transient situational and motivational factors that promote dehumanizing perceptions. Fifth, we examine the consequences of dehumanization, emphasizing its implications for prosocial and antisocial behavior and for moral judgment. Finally, we ask what can be done to reduce dehumanization. We conclude with a discussion of limitations of current scholarship and directions for future research.", "title": "" }, { "docid": "3394eb51b71e5def4e4637963da347ab", "text": "In this paper we present a model of e-learning suitable for teacher training sessions. The main purpose of our work is to define the components of the educational system which influences the successful adoption of e-learning in the field of education. We also present the factors of the readiness of e-learning mentioned in the literature available and classifies them into the 3 major categories that constitute the components of every organization and consequently that of education. Finally, we present an implementation model of e-learning through the use of virtual private networks, which lends an added value to the realization of e-learning.", "title": "" }, { "docid": "b34216c34f32336db67f76f1c94c255b", "text": "Exploration is still one of the crucial problems in reinforcement learning, especially for agents acting in safety-critical situations. We propose a new directed exploration method, based on a notion of state controlability. Intuitively, if an agent wants to stay safe, it should seek out states where the effects of its actions are easier to predict; we call such states more controllable. Our main contribution is a new notion of controlability, computed directly from temporaldifference errors. Unlike other existing approaches of this type, our method scales linearly with the number of state features, and is directly applicable to function approximation. Our method converges to correct values in the policy evaluation setting. We also demonstrate significantly faster learning when this exploration strategy is used in large control problems.", "title": "" }, { "docid": "76e6c05e41c4e6d3c70c8fedec5c323b", "text": "Commercial light field cameras provide spatial and angular information, but their limited resolution becomes an important problem in practical use. In this letter, we present a novel method for light field image super-resolution (SR) to simultaneously up-sample both the spatial and angular resolutions of a light field image via a deep convolutional neural network. 
We first augment the spatial resolution of each subaperture image by a spatial SR network, then novel views between super-resolved subaperture images are generated by three different angular SR networks according to the novel view locations. We improve both the efficiency of training and the quality of angular SR results by using weight sharing. In addition, we provide a new light field image dataset for training and validating the network. We train our whole network end-to-end, and show state-of-the-art performances on quantitative and qualitative evaluations.", "title": "" }, { "docid": "ae536a72dfba1e7eff57989c3f94ae3e", "text": "Policymakers are often interested in estimating how policy interventions affect the outcomes of those most in need of help. This concern has motivated the practice of disaggregating experimental results by groups constructed on the basis of an index of baseline characteristics that predicts the values of individual outcomes without the treatment. This paper shows that substantial biases may arise in practice if the index is estimated by regressing the outcome variable on baseline characteristics for the full sample of experimental controls. We propose alternative methods that correct this bias and show that they behave well in realistic scenarios.", "title": "" }, { "docid": "7e2bbd260e58d84a4be8b721cdf51244", "text": "Obesity is characterised by altered gut microbiota, low-grade inflammation and increased endocannabinoid (eCB) system tone; however, a clear connection between gut microbiota and eCB signalling has yet to be confirmed. Here, we report that gut microbiota modulate the intestinal eCB system tone, which in turn regulates gut permeability and plasma lipopolysaccharide (LPS) levels. The impact of the increased plasma LPS levels and eCB system tone found in obesity on adipose tissue metabolism (e.g. differentiation and lipogenesis) remains unknown. By interfering with the eCB system using CB(1) agonist and antagonist in lean and obese mouse models, we found that the eCB system controls gut permeability and adipogenesis. We also show that LPS acts as a master switch to control adipose tissue metabolism both in vivo and ex vivo by blocking cannabinoid-driven adipogenesis. These data indicate that gut microbiota determine adipose tissue physiology through LPS-eCB system regulatory loops and may have critical functions in adipose tissue plasticity during obesity.", "title": "" }, { "docid": "dfa62c69b1ab26e7e160100b69794674", "text": "Canonical correlation analysis (CCA) is a well established technique for identifying linear relationships among two variable sets. Kernel CCA (KCCA) is the most notable nonlinear extension but it lacks interpretability and robustness against irrelevant features. The aim of this article is to introduce two nonlinear CCA extensions that rely on the recently proposed Hilbert-Schmidt independence criterion and the centered kernel target alignment. These extensions determine linear projections that provide maximally dependent projected data pairs. The paper demonstrates that the use of linear projections allows removing irrelevant features, whilst extracting combinations of strongly associated features. This is exemplified through a simulation and the analysis of recorded data that are available in the literature.", "title": "" }, { "docid": "f84016570e5f9c7de7a452e88e0edb14", "text": "Requirements of enterprise applications have become much more demanding. 
They require the computation of complex reports on transactional data while thousands of users may read or update records of the same data. The goal of the SAP HANA database is the integration of transactional and analytical workload within the same database management system. To achieve this, a columnar engine exploits modern hardware (multiple CPU cores, large main memory, and caches), compression of database content, maximum parallelization in the database kernel, and database extensions required by enterprise applications, e.g., specialized data structures for hierarchies or support for domain specific languages. In this paper we highlight the architectural concepts employed in the SAP HANA database. We also report on insights gathered with the SAP HANA database in real-world enterprise application scenarios.", "title": "" }, { "docid": "a3f6f2e6415267bb5b9ac92c3c77e872", "text": "In recent times, the use of separable convolutions in deep convolutional neural network architectures has been explored. Several researchers, most notably and have used separable convolutions in their deep architectures and have demonstrated state of the art or close to state of the art performance. However, the underlying mechanism of action of separable convolutions is still not fully understood. Although, their mathematical definition is well understood as a depth-wise convolution followed by a point-wise convolution, “deeper” interpretations (such as the “extreme Inception”) hypothesis have failed to provide a thorough explanation of their efficacy. In this paper, we propose a hybrid interpretation that we believe is a better model for explaining the efficacy of separable convolutions.", "title": "" }, { "docid": "184da4d4589a3a9dc1f339042e6bc674", "text": "Ocular dominance plasticity has long served as a successful model for examining how cortical circuits are shaped by experience. In this paradigm, altered retinal activity caused by unilateral eye-lid closure leads to dramatic shifts in the binocular response properties of neurons in the visual cortex. Much of the recent progress in identifying the cellular and molecular mechanisms underlying ocular dominance plasticity has been achieved by using the mouse as a model system. In this species, monocular deprivation initiated in adulthood also causes robust ocular dominance shifts. Research on ocular dominance plasticity in the mouse is starting to provide insight into which factors mediate and influence cortical plasticity in juvenile and adult animals.", "title": "" }, { "docid": "f87fea9cd76d1545c34f8e813347146e", "text": "In fault detection and isolation, diagnostic test results are commonly used to compute a set of diagnoses, where each diagnosis points at a set of components which might behave abnormally. In distributed systems consisting of multiple control units, the test results in each unit can be used to compute local diagnoses while all test results in the complete system give the global diagnoses. It is an advantage for both repair and fault-tolerant control to have access to the global diagnoses in each unit since these diagnoses represent all test results in all units. However, when the diagnoses, for example, are to be used to repair a unit, only the components that are used by the unit are of interest. The reason for this is that it is only these components that could have caused the abnormal behavior. 
However, the global diagnoses might include components from the complete system and therefore often include components that are superfluous for the unit. Motivated by this observation, a new type of diagnosis is proposed, namely, the condensed diagnosis. Each unit has a unique set of condensed diagnoses which represents the global diagnoses. The benefit of the condensed diagnoses is that they only include components used by the unit while still representing the global diagnoses. The proposed method is applied to an automotive vehicle, and the results from the application study show the benefit of using condensed diagnoses compared to global diagnoses.", "title": "" }, { "docid": "d566e25ed5ff6e479887a350572cadad", "text": "Lorentz reciprocity is a fundamental characteristic of the vast majority of electronic and photonic structures. However, non-reciprocal components such as isolators, circulators and gyrators enable new applications ranging from radio frequencies to optical frequencies, including full-duplex wireless communication and on-chip all-optical information processing. Such components today dominantly rely on the phenomenon of Faraday rotation in magneto-optic materials. However, they are typically bulky, expensive and not suitable for insertion in a conventional integrated circuit. Here we demonstrate magnetic-free linear passive non-reciprocity based on the concept of staggered commutation. Commutation is a form of parametric modulation with very high modulation ratio. We observe that staggered commutation enables time-reversal symmetry breaking within very small dimensions (λ/1,250 × λ/1,250 in our device), resulting in a miniature radio-frequency circulator that exhibits reduced implementation complexity, very low loss, strong non-reciprocity, significantly enhanced linearity and real-time reconfigurability, and is integrated in a conventional complementary metal-oxide-semiconductor integrated circuit for the first time.", "title": "" }, { "docid": "8eafcf061e2b9cda4cd02de9bf9a31d1", "text": "Building upon recent Deep Neural Network architectures, current approaches lying in the intersection of Computer Vision and Natural Language Processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of news articles in which the textual content often expresses connotative and ambiguous relations that are only suggested but not directly inferred from images. We introduce an adaptive CNN architecture that shares most of the structure for multiple tasks including source detection, article illustration and geolocation of articles. Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset with approximately 100K news articles including images, text and captions, and enriched with heterogeneous meta-data (such as GPS coordinates and user comments). 
We show this dataset to be appropriate to explore all aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures, and different representations of the textual and visual features. We report very promising results and bring to light several limitations of current state-of-the-art in this kind of domain, which we hope will help spur progress in the field.", "title": "" }, { "docid": "7d3449a6ea821d214f7d961d4c85c6a4", "text": "Collisions between automated moving equipment and human workers in job sites are one of the main sources of fatalities and accidents during the execution of construction projects. In this paper, we present a methodology to identify and assess project plans in terms of hazards before their execution. Our methodology has the following steps: 1) several potential plans are extracted from an initial activity graph; 2) plans are translated from a high-level activity graph to a discrete-event simulation model; 3) trajectories and safety policies are generated that avoid static and moving obstacles using existing motion planning algorithms; 4) safety scores and risk-based heatmaps are calculated based on the trajectories of moving equipment; and 5) managerial implications are provided to select an acceptable plan with the aid of a sensitivity analysis of different factors (cost, resources, and deadlines) that affect the safety of a plan. Finally, we present illustrative case study examples to demonstrate the usefulness of our model.Note to Practitioners—Currently, construction project planning does not explicitly consider safety due to a lack of automated tools that can identify a plan’s safety level before its execution. This paper proposes an automated construction safety assessment tool which is able to evaluate the alternate construction plans and help to choose considering safety, cost, and deadlines. Our methodology uses discrete-event modeling along with motion planning to simulate the motions of workers and equipment, which account for most of the hazards in construction sites. Our method is capable of generating safe motion trajectories and coordination policies for both humans and machines to minimize the number of collisions. We also provide safety heatmaps as a spatiotemporal visual display of construction site to identify risky zones inside the environment throughout the entire timeline of the project. Additionally, a detailed sensitivity analysis helps to choose among plans in terms of safety, cost, and deadlines.", "title": "" }, { "docid": "c75b309fc89e75cb7b6fa415175aa192", "text": "Tweets have become an increasingly popular source of fresh information. We investigate the task of Nominal Semantic Role Labeling (NSRL) for tweets, which aims to identify predicate-argument structures defined by nominals in tweets. Studies of this task can help fine-grained information extraction and retrieval from tweets. There are two main challenges in this task: 1) The lack of information in a single tweet, rooted in the short and noisy nature of tweets; and 2) recovery of implicit arguments. We propose jointly conducting NSRL on multiple similar tweets using a graphical model, leveraging the redundancy in tweets to tackle these challenges. 
Extensive evaluations on a human annotated data set demonstrate that our method outperforms two baselines with an absolute gain of 2.7% in F", "title": "" }, { "docid": "1512f35cd69a456a72f981577cfb068b", "text": "Recurrence and progression to higher grade lesions are key biological events and characteristic behaviors in the evolution process of glioma. Malignant astrocytic tumors such as glioblastoma (GBM) are the most lethal intracranial tumors. However, the clinical practicability and significance of molecular parameters for the diagnostic and prognostic prediction of astrocytic tumors is still limited. In this study, we detected ATRX, IDH1-R132H and Ki-67 by immunohistochemistry and observed the association of IDH1-R132H with ATRX and Ki-67 expression. There was a strong association between ATRX loss and IDH1-R132H (p<0.0001). However, Ki-67 high expression restricted in the tumors with IDH1-R132H negative (p=0.0129). Patients with IDH1-R132H positive or ATRX loss astrocytic tumors had a longer progression-free survival (p<0.0001, p=0.0044, respectively). High Ki-67 expression was associated with shorter PFS in patients with astrocytic tumors (p=0.002). Then we characterized three prognostic subgroups of astrocytic tumors (referred to as A1, A2 and A3). The new model demonstrated a remarkable separation of the progression interval in the three molecular subgroups and the distribution of patients' age in the A1-A2-A3 model was also significant different. This model will aid predicting the overall survival and progressive time of astrocytic tumors' patients.", "title": "" } ]
scidocsrr
e743968cd2b440c68ff45c59ed11c1f6
DropFilter: A Novel Regularization Method for Learning Convolutional Neural Networks
[ { "docid": "f4cd7a70a257aea595bf4a26142127ff", "text": "Recent state-of-the-art performance on human-body pose estimation has been achieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet architectures include pooling and sub-sampling layers which reduce computational requirements, introduce invariance and prevent over-training. These benefits of pooling come at the cost of reduced localization accuracy. We introduce a novel architecture which includes an efficient `position refinement' model that is trained to estimate the joint offset location within a small region of the image. This refinement model is jointly trained in cascade with a state-of-the-art ConvNet model [21] to achieve improved accuracy in human joint location estimation. We show that the variance of our detector approaches the variance of human annotations on the FLIC [20] dataset and outperforms all existing approaches on the MPII-human-pose dataset [1].", "title": "" }, { "docid": "6a74a4d52d468b823a8a9e1a123864bd", "text": "In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person reidentification. Code is available at: https://github. com/zhunzhong07/Random-Erasing.", "title": "" }, { "docid": "af25bc1266003202d3448c098628aee8", "text": "Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR10, CIFAR-100, and SVHN datasets, yielding new state-ofthe-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code available at https://github.com/ uoguelph-mlrg/Cutout.", "title": "" } ]
[ { "docid": "ef66627d34d684e41bc7541b18dfd687", "text": "This work proposes a simple instance retrieval pipeline based on encoding the convolutional features of CNN using the bag of words aggregation scheme (BoW). Assigning each local array of activations in a convolutional layer to a visual word produces an assignment map, a compact representation that relates regions of an image with a visual word. We use the assignment map for fast spatial reranking, obtaining object localizations that are used for query expansion. We demonstrate the suitability of the BoW representation based on local CNN features for instance retrieval, achieving competitive performance on the Oxford and Paris buildings benchmarks. We show that our proposed system for CNN feature aggregation with BoW outperforms state-of-the-art techniques using sum pooling at a subset of the challenging TRECVid INS benchmark.", "title": "" }, { "docid": "6ec3c98e78e78303a0dc0068ab90a17d", "text": "INTRODUCTION\nIn this study we report a large series of patients with unilateral winged scapula (WS), with special attention to long thoracic nerve (LTN) palsy.\n\n\nMETHODS\nClinical and electrodiagnostic data were collected from 128 patients over a 25-year period.\n\n\nRESULTS\nCauses of unilateral WS were LTN palsy (n = 70), spinal accessory nerve (SAN) palsy (n = 39), both LTN and SAN palsy (n = 5), facioscapulohumeral dystrophy (FSH) (n = 5), orthopedic causes (n = 11), voluntary WS (n = 6), and no definite cause (n = 2). LTN palsy was related to neuralgic amyotrophy (NA) in 61 patients and involved the right side in 62 patients.\n\n\nDISCUSSION\nClinical data allow for identifying 2 main clinical patterns for LTN and SAN palsy. Electrodiagnostic examination should consider bilateral nerve conduction studies of the LTN and SAN, and needle electromyography of their target muscles. LTN palsy is the most frequent cause of unilateral WS and is usually related to NA. Voluntary WS and FSH must be considered in young patients. Muscle Nerve 57: 913-920, 2018.", "title": "" }, { "docid": "5eed0c6f114382d868cd841c7b5d9986", "text": "Automatic signature verification is a well-established and an active area of research with numerous applications such as bank check verification, ATM access, etc. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted from box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to take account of possible variations due to handwriting styles and to reflect moods. The membership functions constitute weights in the TS model. The optimization of the output of the TS model with respect to the structural parameters yields the solution for the parameters. We have also derived two TS models by considering a rule for each input feature in the first formulation (Multiple rules) and by considering a single rule for all input features in the second formulation. 
In this work, we have found that TS model with multiple rules is better than TS model with single rule for detecting three types of forgeries; random, skilled and unskilled from a large database of sample signatures in addition to verifying genuine signatures. We have also devised three approaches, viz., an innovative approach and two intuitive approaches using the TS model with multiple rules for improved performance.", "title": "" }, { "docid": "c713e4a5536c065d8d40c1e2482557bc", "text": "In this paper, we propose a robust and accurate method to detect fingertips of hand palm with a down-looking camera mounted on an eyeglass for the utilization of hand gestures for user interaction between human and computers. To ensure consistent performance under unconstrained environments, we propose a novel method to precisely locate fingertips by combining both statistical information of palm edge distribution and structure information of convex hull analysis on palm contour. Briefly, first SVM (support vector machine) with a statistical nine-bin based HOG (histogram of oriented gradient) features is introduced for robust hand detection from video stream. Then, binary image regions are segmented out by an adaptive Cg-Cr model on detected hands. With the prior information of hand contour, it takes a global optimization approach of convex hull analysis to locate hand fingertip. The experimental results have demonstrated that the proposed approach performs well because it can well detect all hand fingertips even under some extreme environments.", "title": "" }, { "docid": "b299b939b73e1af0167519c4090dd639", "text": "Machine learning models hosted in a cloud service are increasingly popular but risk privacy: clients sending prediction requests to the service need to disclose potentially sensitive information. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about clients' input and clients learn nothing about the model.\n We present MiniONN, the first approach for transforming an existing neural network to an oblivious neural network supporting privacy-preserving predictions with reasonable efficiency. Unlike prior work, MiniONN requires no change to how models are trained. To this end, we design oblivious protocols for commonly used operations in neural network prediction models. We show that MiniONN outperforms existing work in terms of response latency and message sizes. We demonstrate the wide applicability of MiniONN by transforming several typical neural network models trained from standard datasets.", "title": "" }, { "docid": "f54631ac73d42af0ccb2811d483fe8c2", "text": "Understanding large, structured documents like scholarly articles, requests for proposals or business reports is a complex and difficult task. It involves discovering a document's overall purpose and subject(s), understanding the function and meaning of its sections and subsections, and extracting low level entities and facts about them. In this research, we present a deep learning based document ontology to capture the general purpose semantic structure and domain specific semantic concepts from a large number of academic articles and business documents. The ontology is able to describe different functional parts of a document, which can be used to enhance semantic indexing for a better understanding by human beings and machines. 
We evaluate our models through extensive experiments on datasets of scholarly articles from arxiv and Request for Proposal documents.", "title": "" }, { "docid": "acd5879d3d2746e4c6036691e4099f7a", "text": "Alkamides are fatty acid amides of wide distribution in plants, structurally related to N-acyl-L-homoserine lactones (AHLs) from Gram-negative bacteria and to N-acylethanolamines (NAEs) from plants and mammals. Global analysis of gene expression changes in Arabidopsis thaliana in response to N-isobutyl decanamide, the most highly active alkamide identified to date, revealed an overrepresentation of defense-responsive transcriptional networks. In particular, genes encoding enzymes for jasmonic acid (JA) biosynthesis increased their expression, which occurred in parallel with JA, nitric oxide (NO) and H₂O₂ accumulation. The activity of the alkamide to confer resistance against the necrotizing fungus Botrytis cinerea was tested by inoculating Arabidopsis detached leaves with conidiospores and evaluating disease symptoms and fungal proliferation. N-isobutyl decanamide application significantly reduced necrosis caused by the pathogen and inhibited fungal proliferation. Arabidopsis mutants jar1 and coi1 altered in JA signaling and a MAP kinase mutant (mpk6), unlike salicylic acid- (SA) related mutant eds16/sid2-1, were unable to defend from fungal attack even when N-isobutyl decanamide was supplied, indicating that alkamides could modulate some necrotrophic-associated defense responses through JA-dependent and MPK6-regulated signaling pathways. Our results suggest a role of alkamides in plant immunity induction.", "title": "" }, { "docid": "2f122217b79d258e2001bb16d639b6e4", "text": "Electrochemical Impedance Spectroscopy (EIS) has been recently proposed as a simple non-invasive technique to monitor the amount of fat and liquids contained inside the human body. While the technique capabilities are still questioned, a simple and low cost device capable of performing this kind of measurements would help testing it on many patients with minimal effort. This paper describes an extremely low cost implementation of an EIS system suitable for medical applications that is based on a simple commercial Arduino Board whose cost is below 50$. The circuit takes advantage of the ADC and DAC made available by the microcontroller of the Arduino boards and employs a logarithmic amplifier to extend the impedance measuring range to 6 decades without using complex programmable gain amplifiers. This way the device can use electrodes with sizes in the range of 1 cm2 to 40 cm2. The EIS can be measured in the frequency range of 1 Hz to 100 kHz and in the impedance range of 1 kΩ to 1 GΩ. The instrument automatically compensates the DC voltage due to the skin/electrode contact and runs on batteries. The EIS traces can be stored inside the device and transferred to the PC with a wireless link avoiding safety issues.", "title": "" }, { "docid": "d0e7bc4dab94eae7148ec0316918cf69", "text": "The exploitation of syntactic structures and semantic background knowledge has always been an appealing subject in the context of text retrieval and information management. The usefulness of this kind of information has been shown most prominently in highly specialized tasks, such as classification in Question Answering (QA) scenarios. So far, however, additional syntactic or semantic information has been used only individually. In this paper, we propose a principled approach for jointly exploiting both types of information. 
We propose a new type of kernel, the Semantic Syntactic Tree Kernel (SSTK), which incorporates linguistic structures, e.g. syntactic dependencies, and semantic background knowledge, e.g. term similarity based on WordNet, to automatically learn question categories in QA. We show the power of this approach in a series of experiments with a well known Question Classification dataset.", "title": "" }, { "docid": "6d552edc0d60470ce942b9d57b6341e3", "text": "A rich element of cooperative games are mechanics that communicate. Unlike automated awareness cues and synchronous verbal communication, cooperative communication mechanics enable players to share information and direct action by engaging with game systems. These include both explicitly communicative mechanics, such as built-in pings that direct teammates' attention to specific locations, and emergent communicative mechanics, where players develop their own conventions about the meaning of in-game activities, like jumping to get attention. We use a grounded theory approach with 40 digital games to identify and classify the types of cooperative communication mechanics game designers might use to enable cooperative play. We provide details on the classification scheme and offer a discussion on the implications of cooperative communication mechanics.", "title": "" }, { "docid": "c3eca8a83161a19c77406dc6393aa5b0", "text": "Cell division in eukaryotes requires extensive architectural changes of the nuclear envelope (NE) to ensure that segregated DNA is finally enclosed in a single cell nucleus in each daughter cell. Higher eukaryotic cells have evolved 'open' mitosis, the most extreme mechanism to solve the problem of nuclear division, in which the NE is initially completely disassembled and then reassembled in coordination with DNA segregation. Recent progress in the field has now started to uncover mechanistic and molecular details that underlie the changes in NE reorganization during open mitosis. These studies reveal a tight interplay between NE components and the mitotic machinery.", "title": "" }, { "docid": "03d5c8627ec09e4332edfa6842b6fe44", "text": "In the same way businesses use big data to pursue profits, governments use it to promote the public good.", "title": "" }, { "docid": "c1f095252c6c64af9ceeb33e78318b82", "text": "Augmented reality (AR) is a technology in which a user's view of the real world is enhanced or augmented with additional information generated from a computer model. To have a working AR system, the see-through display system must be calibrated so that the graphics are properly rendered. The optical see-through systems present an additional challenge because, unlike the video see-through systems, we do not have direct access to the image data to be used in various calibration procedures. This paper reports on a calibration method we developed for optical see-through head-mounted displays. We first introduce a method for calibrating monocular optical see-through displays (that is, a display for one eye only) and then extend it to stereo optical see-through displays in which the displays for both eyes are calibrated in a single procedure. The method integrates the measurements for the camera and a six-degrees-of-freedom tracker that is attached to the camera to do the calibration. We have used both an off-the-shelf magnetic tracker as well as a vision-based infrared tracker we have built. 
In the monocular case, the calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints. In this method, the user interaction to perform the calibration is extremely easy compared to prior methods, and there is no requirement for keeping the head immobile while performing the calibration. In the stereo calibration case, the user aligns a stereoscopically fused 2D marker, which is perceived in depth, with a single target point in the world whose coordinates are known. As in the monocular case, there is no requirement that the user keep his or her head fixed.", "title": "" }, { "docid": "d2abd2fbb54a307d652bacdf92234466", "text": "Aldehydes-induced toxicity has been implicated in many neurodegenerative diseases. Exposure to reactive aldehydes from (1) alcohol and food metabolism; (2) environmental pollutants, including car, factory exhausts, smog, pesticides, herbicides; (3) metabolism of neurotransmitters, amino acids and (4) lipid peroxidation of biological membrane from excessive ROS, all contribute to 'aldehydic load' that has been linked to the pathology of neurodegenerative diseases. In particular, the α, β-unsaturated aldehydes derived from lipid peroxidation, 4-hydroxynonenal (4-HNE), DOPAL (MAO product of dopamine), malondialdehyde, acrolein and acetaldehyde, all readily form chemical adductions with proteins, DNA and lipids, thus causing neurotoxicity. Mitochondrial aldehyde dehydrogenase 2 (ALDH 2) is a major aldehyde metabolizing enzyme that protects against deleterious aldehyde buildup in brain, a tissue that has a particularly high mitochondrial content. In this review, we highlight the deleterious effects of increased aldehydic load in the neuropathology of ischemic stroke, Alzheimer's disease and Parkinson's disease. We also discuss evidence for the association between ALDH2 deficiency, a common East Asian-specific mutation, and these neuropathologies. A novel class of small molecule aldehyde dehydrogenase activators (Aldas), represented by Alda-1, reduces neuronal cell death in models of ischemic stroke, Alzheimer's disease and Parkinson's disease. Together, these data suggest that reducing aldehydic load by enhancing the activity of aldehyde dehydrogenases, such as ALDH2, represents as a therapeutic strategy for neurodegenerative diseases.", "title": "" }, { "docid": "4b8823bffcc77968b7ac087579ab84c9", "text": "Numerous complains have been made by Android users who severely suffer from the sluggish response when interacting with their devices. However, very few studies have been conducted to understand the user-perceived latency or mitigate the UI-lagging problem. In this paper, we conduct the first systematic measurement study to quantify the user-perceived latency using typical interaction-intensive Android apps in running with and without background workloads. We reveal the insufficiency of Android system in ensuring the performance of foreground apps and therefore design a new system to address the insufficiency accordingly. We develop a lightweight tracker to accurately identify all delay-critical threads that contribute to the slow response of user interactions. We then build a resource manager that can efficiently schedule various system resources including CPU, I/O, and GPU, for optimizing the performance of these threads. We implement the proposed system on commercial smartphones and conduct comprehensive experiments to evaluate our implementation. 
Evaluation results show that our system is able to significantly reduce the user-perceived latency of foreground apps in running with aggressive background workloads, up to 10x, while incurring negligible system overhead of less than 3.1 percent CPU and 7 MB memory.", "title": "" }, { "docid": "b425265606966c9490519ab1d49f8141", "text": "Any books that you read, no matter how you got the sentences that have been read from the books, surely they will give you goodness. But, we will show you one of recommendation of the book that you need to read. This web usability a user centered design approach is what we surely mean. We will show you the reasonable reasons why you need to read this book. This book is a kind of precious book written by an experienced author.", "title": "" }, { "docid": "525b6488420815084bb278d8c76a4229", "text": "Configuration management tools help administrators in defining and automating system configurations. With cloud computing, host numbers are likely to grow. IaaS (infrastructure as a service) offerings with pay-per-use pricing models make fast and effective deployment of applications necessary. Configuration management tools address both challenges. In this paper, the existing research on this topic is reviewed comprehensively. Readers are provided with a descriptive analysis of the published literature as well as with an analysis of the content of the respective research works. The paper serves as an overview for researchers who are new to the topic. Furthermore, it serves to identify work related to an intended research field and identifies research gaps. Practitioners are provided with a means to identify solutions to their organizational problems.", "title": "" }, { "docid": "531a7417bd66ff0fdd7fb35c7d6d8559", "text": "G. R. White University of Sussex, Brighton, UK Abstract In order to design new methodologies for evaluating the user experience of video games, it is imperative to initially understand two core issues. Firstly, how are video games developed at present, including components such as processes, timescales and staff roles, and secondly, how do studios design and evaluate the user experience. This chapter will discuss the video game development process and the practices that studios currently use to achieve the best possible user experience. It will present four case studies from game developers Disney Interactive (Black Rock Studio), Relentless, Zoe Mode, and HandCircus, each detailing their game development process and also how this integrates with the user experience evaluation. The case studies focus on different game genres, platforms, and target user groups, ensuring that this chapter represents a balanced view of current practices in evaluating user experience during the game development process.", "title": "" }, { "docid": "a4cd7466f64258ec98ccca5fda26dbb4", "text": "With the increase in the number of restaurants and population of restaurant-goers, a need to enhance the working of hospitality industry is felt. This research work aims for this betterment of hospitality industry by incorporating technology. A recent survey on the utilisation of technology in hospitality industries showcased that various applications based on wireless technologies are already in use enabling partial automation of the food ordering process. In this paper, we discuss about the design and implementation of digital dining in restaurants using android technology. 
This system is a basic dynamic database utility system which fetches all information from a centralized database. The tablet at the customer table contains the android application with all the restaurant and menu details. The customer tablet, kitchen display and the cashier counter connects directly with each other through Wi-Fi. This wireless application is user-friendly, improves efficiency and accuracy for restaurants by saving time, reduces human errors and provides customer feedback. This system successfully overcomes the drawbacks in earlier automated food ordering systems and is less expensive as it requires a onetime investment for gadgets.", "title": "" }, { "docid": "5bb6e93244e976725bc9663c0afe8136", "text": "Video streaming platforms like Twitch.tv or YouNow have attracted the attention of both users and researchers in the last few years. Users increasingly adopt these platforms to share user-generated videos while researchers study their usage patterns to learn how to provide better and new services.", "title": "" } ]
scidocsrr
a595ae1825df2ada0f401917e874fe67
Multi-Task Vehicle Detection With Region-of-Interest Voting
[ { "docid": "4d99090b874776b89092f63f21c8ea93", "text": "Object viewpoint classification aims at predicting an approximate 3D pose of objects in a scene and is receiving increasing attention. State-of-the-art approaches to viewpoint classification use generative models to capture relations between object parts. In this work we propose to use a mixture of holistic templates (e.g. HOG) and discriminative learning for joint viewpoint classification and category detection. Inspired by the work of Felzenszwalb et al 2009, we discriminatively train multiple components simultaneously for each object category. A large number of components are learned in the mixture and they are associated with canonical viewpoints of the object through different levels of supervision, being fully supervised, semi-supervised, or unsupervised. We show that discriminative learning is capable of producing mixture components that directly provide robust viewpoint classification, significantly outperforming the state of the art: we improve the viewpoint accuracy on the Savarese et al 3D Object database from 57% to 74%, and that on the VOC 2006 car database from 73% to 86%. In addition, the mixture-of-templates approach to object viewpoint/pose has a natural extension to the continuous case by discriminatively learning a linear appearance model locally at each discrete view. We evaluate continuous viewpoint estimation on a dataset of everyday objects collected using IMUs for groundtruth annotation: our mixture model shows great promise comparing to a number of baselines including discrete nearest neighbor and linear regression.", "title": "" }, { "docid": "34b7073f947888694053cb421544cb37", "text": "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.", "title": "" } ]
[ { "docid": "f4cc2848713439b162dc5fc255c336d2", "text": "We consider the problem of waveform design for multiple input/multiple output (MIMO) radars, where the transmit waveforms are adjusted based on target and clutter statistics. A model for the radar returns which incorporates the transmit waveforms is developed. The target detection problem is formulated for that model. Optimal and suboptimal algorithms are derived for designing the transmit waveforms under different assumptions regarding the statistical information available to the detector. The performance of these algorithms is illustrated by computer simulation.", "title": "" }, { "docid": "dd64ac591acfacb6ea514af3f104d0aa", "text": "FluMist influenza A vaccine strains contain the PB1, PB2, PA, NP, M, and NS gene segments of ca A/AA/6/60, the master donor virus-A strain. These gene segments impart the characteristic cold-adapted (ca), attenuated (att), and temperature-sensitive (ts) phenotypes to the vaccine strains. A plasmid-based reverse genetics system was used to create a series of recombinant hybrids between the isogenic non-ts wt A/Ann Arbor/6/60 and MDV-A strains to characterize the genetic basis of the ts phenotype, a critical, genetically stable, biological trait that contributes to the attenuation and safety of FluMist vaccines. PB1, PB2, and NP derived from MDV-A each expressed determinants of temperature sensitivity and the combination of all three gene segments was synergistic, resulting in expression of the characteristic MDV-A ts phenotype. Site-directed mutagenesis analysis mapped the MDV-A ts phenotype to the following four major loci: PB1(1195) (K391E), PB1(1766) (E581G), PB2(821) (N265S), and NP(146) (D34G). In addition, PB1(2005) (A661T) also contributed to the ts phenotype. The identification of multiple genetic loci that control the MDV-A ts phenotype provides a molecular basis for the observed genetic stability of FluMist vaccines.", "title": "" }, { "docid": "b7a04d56d6d06a0d89f6113c3ab639a8", "text": "Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect knowledge, where multiple competing agents must deal with risk management, agent modeling, unreliable information and deception, much like decision-making applications in the real world. Agent modeling is one of the most difficult problems in decision-making applications and in poker it is essential to achieving high performance. This paper describes and evaluates Loki, a poker program capable of observing its opponents, constructing opponent models and dynamically adapting its play to best exploit patterns in the opponents’ play.", "title": "" }, { "docid": "e85b5115a489835bc58a48eaa727447a", "text": "State-of-the art machine learning methods such as deep learning rely on large sets of hand-labeled training data. Collecting training data is prohibitively slow and expensive, especially when technical domain expertise is required; even the largest technology companies struggle with this challenge. We address this critical bottleneck with Snorkel, a new system for quickly creating, managing, and modeling training sets. Snorkel enables users to generate large volumes of training data by writing labeling functions, which are simple functions that express heuristics and other weak supervision strategies. These user-authored labeling functions may have low accuracies and may overlap and conflict, but Snorkel automatically learns their accuracies and synthesizes their output labels. 
Experiments and theory show that surprisingly, by modeling the labeling process in this way, we can train high-accuracy machine learning models even using potentially lower-accuracy inputs. Snorkel is currently used in production at top technology and consulting companies, and used by researchers to extract information from electronic health records, after-action combat reports, and the scientific literature. In this demonstration, we focus on the challenging task of information extraction, a common application of Snorkel in practice. Using the task of extracting corporate employment relationships from news articles, we will demonstrate and build intuition for a radically different way of developing machine learning systems which allows us to effectively bypass the bottleneck of hand-labeling training data.", "title": "" }, { "docid": "3f23f5452c53ae5fcc23d95acdcdafd8", "text": "Metamorphism is a technique that mutates the binary code using different obfuscations and never keeps the same sequence of opcodes in the memory. This stealth technique provides the capability to a malware for evading detection by simple signature-based (such as instruction sequences, byte sequences and string signatures) anti-malware programs. In this paper, we present a new scheme named Annotated Control Flow Graph (ACFG) to efficiently detect such kinds of malware. ACFG is built by annotating CFG of a binary program and is used for graph and pattern matching to analyse and detect metamorphic malware. We also optimize the runtime of malware detection through parallelization and ACFG reduction, maintaining the same accuracy (without ACFG reduction) for malware detection. ACFG proposed in this paper: (i) captures the control flow semantics of a program; (ii) provides a faster matching of ACFGs and can handle malware with smaller CFGs, compared with other such techniques, without compromising the accuracy; (iii) contains more information and hence provides more accuracy than a CFG. Experimental evaluation of the proposed scheme using an existing dataset yields malware detection rate of 98.9% and false positive rate of 4.5%.", "title": "" }, { "docid": "ab132902ce21c35d4b5befb8ff2898b5", "text": "Skip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in “word2vec” software, is usually optimized by stochastic gradient descent. It can be shown that optimizing for SGNS objective can be viewed as an optimization problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problems is to apply Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes SGNS objective using Riemannian optimization and demonstrates its superiority over popular competitors, such as the original method to train SGNS and SVD over SPPMI matrix.", "title": "" }, { "docid": "b0709248d08564b7d1a1f23243aa0946", "text": "TrustZone-based Real-time Kernel Protection (TZ-RKP) is a novel system that provides real-time protection of the OS kernel using the ARM TrustZone secure world. TZ-RKP is more secure than current approaches that use hypervisors to host kernel protection tools. Although hypervisors provide privilege and isolation, they face fundamental security challenges due to their growing complexity and code size. 
TZ-RKP puts its security monitor, which represents its entire Trusted Computing Base (TCB), in the TrustZone secure world; a safe isolated environment that is dedicated to security services. Hence, the security monitor is safe from attacks that can potentially compromise the kernel, which runs in the normal world. Using the secure world for kernel protection has been crippled by the lack of control over targets that run in the normal world. TZ-RKP solves this prominent challenge using novel techniques that deprive the normal world from the ability to control certain privileged system functions. These functions are forced to route through the secure world for inspection and approval before being executed. TZ-RKP's control of the normal world is non-bypassable. It can effectively stop attacks that aim at modifying or injecting kernel binaries. It can also stop attacks that involve modifying the system memory layout, e.g, through memory double mapping. This paper presents the implementation and evaluation of TZ-RKP, which has gone through rigorous and thorough evaluation of effectiveness and performance. It is currently deployed on the latest models of the Samsung Galaxy series smart phones and tablets, which clearly demonstrates that it is a practical real-world system.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "596e9fe6d5908f277e534a76490702b0", "text": "A prototype modular ultrawideband wavelength- scaled array of flared notches has been designed, built, measured and validated with full-wave modeling tools. Wavelength-scaled arrays operate over ultrawide bandwidths with significantly-reduced element counts, maintaining a relatively-constant beam size by utilizing phased-array radiators of different size. The prototype phased array presented here is designed to operate over an 8:1 bandwidth (1-8 GHz), demonstrating a 12-degree beam capacity at 2 GHz, 4 GHz, and 8 GHz. The architecture achieves a reduction in element count by a factor of 6.4-only 160 elements per polarization as compared to a conventional 1024-element phased array of the same aperture size-at the cost of reduced beamwidth capacity in the higher frequency range. Performance metrics (active VSWR and radiation characteristics) of the wavelength-scaled array are measured and validated against full-wave simulations. The technology is presented as a viable alternative to more expensive conventional ultrawideband arrays with dense uniform element layouts.", "title": "" }, { "docid": "27101c9dcb89149b68d3ad47b516db69", "text": "A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. 
Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematical algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices.", "title": "" }, { "docid": "396f0c39b5afbf6bee2f7168f23ecccb", "text": "This work describes a method for real-time motion detection using an active camera mounted on a pan/tilt platform. Image mapping is used to align images of different viewpoints so that static camera motion detection can be applied. In the presence of camera position noise, the image mapping is inexact and compensation techniques fail. The use of morphological filtering of motion images is explored to desensitize the detection algorithm to inaccuracies in background compensation. Two motion detection techniques are examined, and experiments to verify the methods are presented. The system successfully extracts moving edges from dynamic images even when the pan/tilt angles between successive frames are as large as 3\".", "title": "" }, { "docid": "3a29bbe76a53c8284123019eba7e0342", "text": "Although von Ammon first used the term blepharophimosis in 1841, it was Vignes2 in 1889 who first associated blepharophimosis with ptosis and epicanthus inversus. In 1921, Dimitry3 reported a family in which there were 21 affected subjects in five generations. He described them as having ptosis alone and did not specify any other features, although photographs in the report show that they probably had the full syndrome. Dimitry's pedigree was updated by Owens et al in 1960. The syndrome appeared in both sexes and was transmitted as a Mendelian dominant. In 1935, Usher5 reviewed the reported cases. By then, 26 pedigrees had been published with a total of 175 affected persons with transmission mainly through affected males. There was no consanguinity in any pedigree. In three pedigrees, parents who obviously carried the gene were unaffected. Well over 150 families have now been reported and there is no doubt about the autosomal dominant pattern of inheritance. However, like Usher,5 several authors have noted that transmission is mainly through affected males and less commonly through affected females.4 6 Reports by Moraine et al7 and Townes and Muechler8 have described families where all affected females were either infertile with primary or secondary amenorrhoea or had menstrual irregularity. Zlotogora et al9 described one family and analysed 38 families reported previously. 
They proposed the existence of two types: type I, the more common type, in which the syndrome is transmitted by males only and affected females are infertile, and type II, which is transmitted by both affected females and males. There is male to male transmission in both types and both are inherited as an autosomal dominant trait. They found complete penetrance in type I and slightly reduced penetrance in type II.", "title": "" }, { "docid": "c392b0a382bd4617b966d400591f37ff", "text": "The availability of reliable electrical power supply for petrochemical industries is extremely important not only to maintain continuous production but also from the point of view of overall plant safety. The costs, caused by production losses, due to a partial or a complete blackout easily run up to millions of dollars per blackout. If a proper load-shedding scheme is implemented, it is possible to save a plant from such occurrences. A lack of electrical power can be caused by loss of generation capacity or disconnection from the public power company supply. The load-shedding system ensures the availability of electrical power to all essential and most critical loads in the plant. This is achieved by switching off nonessential loads in the case of a lack of power in the plant electrical network, or parts of the plant electrical network. This paper discusses a comprehensive load-shedding scheme for industrial applications with a background on a similar system implemented at large integrated complexes such as refineries and petrochemical plants.", "title": "" }, { "docid": "9c632dc4173b24d19ef21997ebcd0586", "text": "It is well known that anchor text plays a critical role in a variety of search tasks performed over hypertextual domains, including enterprise search, wiki search, and web search. It is common practice to enrich a document's standard textual representation with all of the anchor text associated with its incoming hyperlinks. However, this approach does not help match relevant pages with very few inlinks. In this paper, we propose a method for overcoming anchor text sparsity by enriching document representations with anchor text that has been aggregated across the hyperlink graph. This aggregation mechanism acts to smooth, or diffuse, anchor text within a domain. We rigorously evaluate our proposed approach on a large web search test collection. Our results show the approach significantly improves retrieval effectiveness, especially for longer, more difficult queries.", "title": "" }, { "docid": "9fcf513f9f8c7f3e00ae78b55618af8b", "text": "Graph analysis is becoming increasingly important in many research fields - biology, social sciences, data mining - and daily applications - path finding, product recommendation. Many different large-scale graph-processing systems have been proposed for different platforms. However, little effort has been placed on designing systems for hybrid CPU-GPU platforms. In this work, we present HyGraph, a novel graph-processing system for hybrid platforms which delivers performance by using CPUs and GPUs concurrently. 
Its core feature is a specialized data structure which enables dynamic scheduling of jobs onto both the CPU and the GPUs, thus (1) supersedes the need for static workload distribution, (2) provides load balancing, and (3) minimizes inter-process communication overhead by overlapping computation and communication.Our preliminary results demonstrate that HyGraph outperforms CPU-only and GPU-only solutions, delivering close-to-optimal performance on the hybrid system. Moreover, it supports large-scale graphs which do not fit into GPU memory, and it is competitive against state-of-the-art systems.", "title": "" }, { "docid": "bde9e26746ddcc6e53f442a0e400a57e", "text": "Aljebreen, Mohammed, \"Implementing a dynamic scaling of web applications in a virtualized cloud computing environment\" (2013). Abstract Cloud computing is becoming more essential day by day. The allure of the cloud is the significant value and benefits that people gain from it, such as reduced costs, increased storage, flexibility, and more mobility. Flexibility is one of the major benefits that cloud computing can provide in terms of scaling up and down the infrastructure of a network. Once traffic has increased on one server within the network, a load balancer instance will route incoming requests to a healthy instance, which is less busy and less burdened. When the full complement of instances cannot handle any more requests, past research has been done by Chieu et. al. that presented a scaling algorithm to address a dynamic scalability of web applications on a virtualized cloud computing environment based on relevant indicators that can increase or decrease servers, as needed. In this project, I implemented the proposed algorithm, but based on CPU Utilization threshold. In addition, two tests were run exploring the capabilities of different metrics when faced with ideal or challenging conditions. The results did find a superior metric that was able to perform successfully under both tests. 3 Dedication I lovingly dedicate this thesis to my gracious and devoted mother for her unwavering love and for always believing in me. 4 Acknowledgments This thesis would not have been possible without the support of many people. My wish is to express humble gratitude to the committee chair, Prof. Sharon Mason, who was perpetually generous in offering her invaluable assistance, support, and guidance. Deepest gratitude is also due to the members of my supervisory committee, Prof. Lawrence Hill and Prof. Jim Leone, without whose knowledge and direction this study would not have been successful. Special thanks also to Prof. Charles Border for his financial support of this thesis and priceless assistance. Profound gratitude to my mother, Moneerah, who has been there from the very beginning, for her support and endless love. I would also like to convey thanks to my wife for her patient and unending encouragement and support throughout the duration of my studies; without my wife's encouragement, I would not have completed this degree. I wish to express my gratitude to my beloved sister and brothers for their kind understanding throughout my studies. Special thanks to my friend, Mohammed Almathami, for his …", "title": "" }, { "docid": "e7bfafee5cfaaa1a6a41ae61bdee753d", "text": "Borderline personality disorder (BPD) has been shown to be a valid and reliable diagnosis in adolescents and associated with a decrease in both general and social functioning. 
With evidence linking BPD in adolescents to poor prognosis, it is important to develop a better understanding of factors and mechanisms contributing to the development of BPD. This could potentially enhance our knowledge and facilitate the design of novel treatment programs and interventions for this group. In this paper, we outline a theoretical model of BPD in adolescents linking the original mentalization-based theory of BPD, with recent extensions of the theory that focuses on hypermentalizing and epistemic trust. We then provide clinical case vignettes to illustrate this extended theoretical model of BPD. Furthermore, we suggest a treatment approach to BPD in adolescents that focuses on the reduction of hypermentalizing and epistemic mistrust. We conclude with an integration of theory and practice in the final section of the paper and make recommendations for future work in this area. (PsycINFO Database Record", "title": "" }, { "docid": "55ec669a67b88ff0b6b88f1fa6408df9", "text": "This paper proposes low overhead training techniques for a wireless communication system equipped with a Multifunctional Reconfigurable Antenna (MRA) capable of dynamically changing beamwidth and beam directions. A novel microelectromechanical system (MEMS) MRA antenna is presented with radiation patterns (generated using complete electromagnetic full-wave analysis) which are used to quantify the communication link performance gains. In particular, it is shown that using the proposed Exhaustive Training at Reduced Frequency (ETRF) consistently results in a reduction in training overhead. It is also demonstrated that further reduction in training overhead is possible using statistical or MUSIC-based training schemes. Bit Error Rate (BER) and capacity simulations are carried out using an MRA, which can tilt its radiation beam into one of Ndir = 4 or 8 directions with variable beamwidth (≈2π/Ndir). The performance of each training scheme is quantified for OFDM systems operating in frequency selective channels with and without Line of Sight (LoS). We observe 6 dB of gain at BER = 10-4 and 6 dB improvement in capacity (at capacity = 6 bits/sec/subcarrier) are achievable for an MRA with Ndir= 8 as compared to omni directional antennas using ETRF scheme in a LoS environment.", "title": "" }, { "docid": "002acd845aa9776840dfe9e8755d7732", "text": "A detailed study on the mechanism of band-to-band tunneling in carbon nanotube field-effect transistors (CNFETs) is presented. Through a dual-gated CNFET structure tunneling currents from the valence into the conduction band and vice versa can be enabled or disabled by changing the gate potential. Different from a conventional device where the Fermi distribution ultimately limits the gate voltage range for switching the device on or off, current flow is controlled here by the valence and conduction band edges in a bandpass-filter-like arrangement. We discuss how the structure of the nanotube is the key enabler of this particular one-dimensional tunneling effect.", "title": "" }, { "docid": "cb8dbf14b79edd2a3ee045ad08230a30", "text": "Observational data suggest a link between menaquinone (MK, vitamin K2) intake and cardiovascular (CV) health. However, MK intervention trials with vascular endpoints are lacking. We investigated long-term effects of MK-7 (180 µg MenaQ7/day) supplementation on arterial stiffness in a double-blind, placebo-controlled trial. Healthy postmenopausal women (n=244) received either placebo (n=124) or MK-7 (n=120) for three years. 
Indices of local carotid stiffness (intima-media thickness IMT, Diameter end-diastole and Distension) were measured by echotracking. Regional aortic stiffness (carotid-femoral and carotid-radial Pulse Wave Velocity, cfPWV and crPWV, respectively) was measured using mechanotransducers. Circulating desphospho-uncarboxylated matrix Gla-protein (dp-ucMGP) as well as acute phase markers Interleukin-6 (IL-6), high-sensitive C-reactive protein (hsCRP), tumour necrosis factor-α (TNF-α) and markers for endothelial dysfunction Vascular Cell Adhesion Molecule (VCAM), E-selectin, and Advanced Glycation Endproducts (AGEs) were measured. At baseline dp-ucMGP was associated with IMT, Diameter, cfPWV and with the mean z-scores of acute phase markers (APMscore) and of markers for endothelial dysfunction (EDFscore). After three year MK-7 supplementation cfPWV and the Stiffness Index βsignificantly decreased in the total group, whereas distension, compliance, distensibility, Young's Modulus, and the local carotid PWV (cPWV) improved in women having a baseline Stiffness Index β above the median of 10.8. MK-7 decreased dp-ucMGP by 50 % compared to placebo, but did not influence the markers for acute phase and endothelial dysfunction. In conclusion, long-term use of MK-7 supplements improves arterial stiffness in healthy postmenopausal women, especially in women having a high arterial stiffness.", "title": "" } ]
scidocsrr
ab9d3b2d479121643c7f690057cbb60a
Sentiment Analysis in Social Media Texts
[ { "docid": "52a5f4c15c1992602b8fe21270582cc6", "text": "This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO’s computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On realworld sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.", "title": "" }, { "docid": "4ef6adf0021e85d9bf94079d776d686d", "text": "Recent years have brought a significant growth in the volume of research in sentiment analysis, mostly on highly subjective text types (movie or product reviews). The main difference these texts have with news articles is that their target is clearly defined and unique across the text. Following different annotation efforts and the analysis of the issues encountered, we realised that news opinion mining is different from that of other text types. We identified three subtasks that need to be addressed: definition of the target; separation of the good and bad news content from the good and bad sentiment expressed on the target; and analysis of clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. Furthermore, we distinguish three different possible views on newspaper articles – author, reader and text, which have to be addressed differently at the time of analysing sentiment. Given these definitions, we present work on mining opinions about entities in English language news, in which (a) we test the relative suitability of various sentiment dictionaries and (b) we attempt to separate positive or negative opinion from good or bad news. In the experiments described here, we tested whether or not subject domain-defining vocabulary should be ignored. Results showed that this idea is more appropriate in the context of news opinion mining and that the approaches taking this into consideration produce a better performance.", "title": "" } ]
[ { "docid": "0b117f379a32b0ba4383c71a692405c8", "text": "Today’s educational policies are largely devoted to fostering the development and implementation of computer applications in education. This paper analyses the skills and competences needed for the knowledgebased society and reveals the role and impact of using computer applications to the teaching and learning processes. Also, the aim of this paper is to reveal the outcomes of a study conducted in order to determine the impact of using computer applications in teaching and learning Management and to propose new opportunities for the process improvement. The findings of this study related to the teachers’ and students’ perceptions about using computer applications for teaching and learning could open further researches on computer applications in education and their educational and economic implications.", "title": "" }, { "docid": "656baf66e6dd638d9f48ea621593bac3", "text": "Recent evidence suggests that a particular gut microbial community may favour occurrence of the metabolic diseases. Recently, we reported that high-fat (HF) feeding was associated with higher endotoxaemia and lower Bifidobacterium species (spp.) caecal content in mice. We therefore tested whether restoration of the quantity of caecal Bifidobacterium spp. could modulate metabolic endotoxaemia, the inflammatory tone and the development of diabetes. Since bifidobacteria have been reported to reduce intestinal endotoxin levels and improve mucosal barrier function, we specifically increased the gut bifidobacterial content of HF-diet-fed mice through the use of a prebiotic (oligofructose [OFS]). Compared with normal chow-fed control mice, HF feeding significantly reduced intestinal Gram-negative and Gram-positive bacteria including levels of bifidobacteria, a dominant member of the intestinal microbiota, which is seen as physiologically positive. As expected, HF-OFS-fed mice had totally restored quantities of bifidobacteria. HF-feeding significantly increased endotoxaemia, which was normalised to control levels in HF-OFS-treated mice. Multiple-correlation analyses showed that endotoxaemia significantly and negatively correlated with Bifidobacterium spp., but no relationship was seen between endotoxaemia and any other bacterial group. Finally, in HF-OFS-treated-mice, Bifidobacterium spp. significantly and positively correlated with improved glucose tolerance, glucose-induced insulin secretion and normalised inflammatory tone (decreased endotoxaemia, plasma and adipose tissue proinflammatory cytokines). Together, these findings suggest that the gut microbiota contribute towards the pathophysiological regulation of endotoxaemia and set the tone of inflammation for occurrence of diabetes and/or obesity. Thus, it would be useful to develop specific strategies for modifying gut microbiota in favour of bifidobacteria to prevent the deleterious effect of HF-diet-induced metabolic diseases.", "title": "" }, { "docid": "b5fea029d64084089de8e17ae9debffc", "text": "While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. 
While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for \"MSRVideo to Text\") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Networkbased approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.", "title": "" }, { "docid": "c089e788b5cfda6c4a7f518af668bc3a", "text": "The selection of hyper-parameters is critical in Deep Learning. Because of the long training time of complex models and the availability of compute resources in the cloud, “one-shot” optimization schemes – where the sets of hyper-parameters are selected in advance (e.g. on a grid or in a random manner) and the training is executed in parallel – are commonly used. [1] show that grid search is sub-optimal, especially when only a few critical parameters matter, and suggest to use random search instead. Yet, random search can be “unlucky” and produce sets of values that leave some part of the domain unexplored. Quasi-random methods, such as Low Discrepancy Sequences (LDS) avoid these issues. We show that such methods have theoretical properties that make them appealing for performing hyperparameter search, and demonstrate that, when applied to the selection of hyperparameters of complex Deep Learning models (such as state-of-the-art LSTM language models and image classification models), they yield suitable hyperparameters values with much fewer runs than random search. We propose a particularly simple LDS method which can be used as a drop-in replacement for grid/random search in any Deep Learning pipeline, both as a fully one-shot hyperparameter search or as an initializer in iterative batch optimization.", "title": "" }, { "docid": "d1afaada6bf5927d9676cee61d3a1d49", "text": "t-Closeness is a privacy model recently defined for data anonymization. A data set is said to satisfy t-closeness if, for each group of records sharing a combination of key attributes, the distance between the distribution of a confidential attribute in the group and the distribution of the attribute in the entire data set is no more than a threshold t. Here, we define a privacy measure in terms of information theory, similar to t-closeness. 
Then, we use the tools of that theory to show that our privacy measure can be achieved by the postrandomization method (PRAM) for masking in the discrete case, and by a form of noise addition in the general case.", "title": "" }, { "docid": "af6f5ef41a3737975893f95796558900", "text": "In this work, we propose a multi-task convolutional neural network learning approach that can simultaneously perform iris localization and presentation attack detection (PAD). The proposed multi-task PAD (MT-PAD) is inspired by an object detection method which directly regresses the parameters of the iris bounding box and computes the probability of presentation attack from the input ocular image. Experiments involving both intra-sensor and cross-sensor scenarios suggest that the proposed method can achieve state-of-the-art results on publicly available datasets. To the best of our knowledge, this is the first work that performs iris detection and iris presentation attack detection simultaneously.", "title": "" }, { "docid": "e7586aea8381245cfa07239158d115af", "text": "The interpolation, prediction, and feature analysis of fine-grained air quality are three important topics in the area of urban air computing. The solutions to these topics can provide extremely useful information to support air pollution control, and consequently generate great societal and technical impacts. Most of the existing work solves the three problems separately by different models. In this paper, we propose a general and effective approach to solve the three problems in one model called the Deep Air Learning (DAL). The main idea of DAL lies in embedding feature selection and semi-supervised learning in different layers of the deep learning network. The proposed approach utilizes the information pertaining to the unlabeled spatio-temporal data to improve the performance of the interpolation and the prediction, and performs feature selection and association analysis to reveal the main relevant features to the variation of the air quality. We evaluate our approach with extensive experiments based on real data sources obtained in Beijing, China. Experiments show that DAL is superior to the peer models from the recent literature when solving the topics of interpolation, prediction, and feature analysis of fine-grained air quality.", "title": "" }, { "docid": "7f75e0b789e7b2bbaa47c7fa06efb852", "text": "A significant increase in the capability for controlling motion dynamics in key frame animation is achieved through skeleton control. This technique allows an animator to develop a complex motion sequence by animating a stick figure representation of an image. This control sequence is then used to drive an image sequence through the same movement. The simplicity of the stick figure image encourages a high level of interaction during the design stage. Its compatibility with the basic key frame animation technique permits skeleton control to be applied selectively to only those components of a composite image sequence that require enhancement.", "title": "" }, { "docid": "e8a2ef4ded8ba4fa2e36588015c2c61a", "text": "The interdisciplinary character of Bio-Inspired Design (BID) has resulted in a plethora of approaches and methods that propose different types of design processes. Although sustainable, creative and complex system design processes are not mutually incompatible, they do focus on different aspects of design. 
This research defines areas of focus for the development of computational tools to support biomimetics, technical problem solving through abstraction, transfer and application of knowledge from biological models. An overview of analysed literature is provided as well as a qualitative analysis of the main themes found in BID literature. The result is a set of recommendations for further research on Computer-Aided Biomimetics (CAB).", "title": "" }, { "docid": "d4ac52a52e780184359289ecb41e321e", "text": "Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback. While a number of isolated studies have analyzed how this technique agrees with conventional offline evaluation approaches and other online techniques, a complete picture of its efficiency and effectiveness is still lacking. In this paper we extend and combine the body of empirical evidence regarding interleaving, and provide a comprehensive analysis of interleaving using data from two major commercial search engines and a retrieval system for scientific literature. In particular, we analyze the agreement of interleaving with manual relevance judgments and observational implicit feedback measures, estimate the statistical efficiency of interleaving, and explore the relative performance of different interleaving variants. We also show how to learn improved credit-assignment functions for clicks that further increase the sensitivity of interleaving.", "title": "" }, { "docid": "2ec973e31082953bd743dc659f417645", "text": "Object detection, including objectness detection (OD), salient object detection (SOD), and category-specific object detection (COD), is one of the most fundamental yet challenging problems in the computer vision community. Over the last several decades, great efforts have been made by researchers to tackle this problem, due to its broad range of applications for other computer vision tasks such as activity or event recognition, content-based image retrieval and scene understanding, etc. While numerous methods have been presented in recent years, a comprehensive review for the proposed high-quality object detection techniques, especially for those based on advanced deep-learning techniques, is still lacking. To this end, this article delves into the recent progress in this research field, including 1) definitions, motivations, and tasks of each subdirection; 2) modern techniques and essential research trends; 3) benchmark data sets and evaluation metrics; and 4) comparisons and analysis of the experimental results. More importantly, we will reveal the underlying relationship among OD, SOD, and COD and discuss in detail some open questions as well as point out several unsolved challenges and promising future works.", "title": "" }, { "docid": "9c38fcfcbfeaf0072e723bd7e1e7d17d", "text": "BACKGROUND\nAllicin (diallylthiosulfinate) is the major volatile- and antimicrobial substance produced by garlic cells upon wounding. We tested the hypothesis that allicin affects membrane function and investigated 1) betanine pigment leakage from beetroot (Beta vulgaris) tissue, 2) the semipermeability of the vacuolar membrane of Rhoeo discolor cells, 3) the electrophysiology of plasmalemma and tonoplast of Chara corallina and 4) electrical conductivity of artificial lipid bilayers.\n\n\nMETHODS\nGarlic juice and chemically synthesized allicin were used and betanine loss into the medium was monitored spectrophotometrically. 
Rhoeo cells were studied microscopically and Chara- and artificial membranes were patch clamped.\n\n\nRESULTS\nBeet cell membranes were approximately 200-fold more sensitive to allicin on a mol-for-mol basis than to dimethyl sulfoxide (DMSO) and approximately 400-fold more sensitive to allicin than to ethanol. Allicin-treated Rhoeo discolor cells lost the ability to plasmolyse in an osmoticum, confirming that their membranes had lost semipermeability after allicin treatment. Furthermore, allicin and garlic juice diluted in artificial pond water caused an immediate strong depolarization, and a decrease in membrane resistance at the plasmalemma of Chara, and caused pore formation in the tonoplast and artificial lipid bilayers.\n\n\nCONCLUSIONS\nAllicin increases the permeability of membranes.\n\n\nGENERAL SIGNIFICANCE\nSince garlic is a common foodstuff the physiological effects of its constituents are important. Allicin's ability to permeabilize cell membranes may contribute to its antimicrobial activity independently of its activity as a thiol reagent.", "title": "" }, { "docid": "f2fa4fa43c21e8c65c752d6ad1d39d06", "text": "Singing voice synthesis techniques have been proposed based on a hidden Markov model (HMM). In these approaches, the spectrum, excitation, and duration of singing voices are simultaneously modeled with context-dependent HMMs and waveforms are generated from the HMMs themselves. However, the quality of the synthesized singing voices still has not reached that of natural singing voices. Deep neural networks (DNNs) have largely improved on conventional approaches in various research areas including speech recognition, image recognition, speech synthesis, etc. The DNN-based text-to-speech (TTS) synthesis can synthesize high quality speech. In the DNN-based TTS system, a DNN is trained to represent the mapping function from contextual features to acoustic features, which are modeled by decision tree-clustered context dependent HMMs in the HMM-based TTS system. In this paper, we propose singing voice synthesis based on a DNN and evaluate its effectiveness. The relationship between the musical score and its acoustic features is modeled in frames by a DNN. For the sparseness of pitch context in a database, a musical-note-level pitch normalization and linear-interpolation techniques are used to prepare the excitation features. Subjective experimental results show that the DNN-based system outperformed the HMM-based system in terms of naturalness.", "title": "" }, { "docid": "dbc463f080610e2ec1cf1841772d1d92", "text": "Malware is one of the greatest and most rapidly growing threats to the digital world. Traditional signature-based detection is no longer adequate to detect new variants and highly targeted malware. Furthermore, dynamic detection is often circumvented with anti-VM and/or anti-debugger techniques. Recently heuristic approaches have been explored to enhance detection accuracy while maintaining the generality of a model to detect unknown malware samples. In this paper, we investigate three feature types extracted from memory images - registry activity, imported libraries, and API function calls. After evaluating the importance of the different features, different machine learning techniques are implemented to compare performances of malware detection using the three feature types, respectively. 
The highest accuracy achieved was 96%, and was reached using a support vector machine model, fitted on data extracted from registry activity.", "title": "" }, { "docid": "23d42976a9651203e0d4dd1c332234ae", "text": "BACKGROUND\nStatistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem.\n\n\nRESULTS\nThe terms in OBCS including 'data collection', 'data transformation in statistics', 'data visualization', 'statistical data analysis', and 'drawing a conclusion based on data', cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. Currently, OBCS comprehends 878 terms, representing 20 BFO classes, 403 OBI classes, 229 OBCS specific classes, and 122 classes imported from ten other OBO ontologies. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. Other ongoing projects using OBCS for statistical data processing are also discussed. The OBCS source code and documentation are available at: https://github.com/obcs/obcs .\n\n\nCONCLUSIONS\nThe Ontology of Biological and Clinical Statistics (OBCS) is a community-based open source ontology in the domain of biological and clinical statistics. OBCS is a timely ontology that represents statistics-related terms and their relations in a rigorous fashion, facilitates standard data analysis and integration, and supports reproducible biological and clinical research.", "title": "" }, { "docid": "9c5a32c49d3e9eff842f155f99facd08", "text": "Urdu is morphologically rich language with different nature of its characters. Urdu text tokenization and sentence boundary disambiguation is difficult as compared to the language like English. Major hurdle for tokenization is improper use of space between words, where as absence of case discrimination makes the sentence boundary detection a difficult task. In this paper some issues regarding both of these language processing tasks have been identified.", "title": "" }, { "docid": "51fc49d6196702f87e7dae215fa93108", "text": "Automatic classification of cancer lesions in tissues observed using gastroenterology imaging is a non-trivial pattern recognition task involving filtering, segmentation, feature extraction and classification. 
In this paper we measure the impact of a variety of segmentation algorithms (mean shift, normalized cuts, level-sets) on the automatic classification performance of gastric tissue into three classes: cancerous, pre-cancerous and normal. Classification uses a combination of color (hue-saturation histograms) and texture (local binary patterns) features, applied to two distinct imaging modalities: chromoendoscopy and narrow-band imaging. Results show that mean-shift obtains an interesting performance for both scenarios producing low classification degradations (6%), full image classification is highly inaccurate reinforcing the importance of segmentation research for Gastroenterology, and confirm that Patch Index is an interesting measure of the classification potential of small to medium segmented regions.", "title": "" }, { "docid": "db6e3742a0413ad5f44647ab1826b796", "text": "Endometrial stromal sarcoma is a rare tumor and has unique histopathologic features. Most tumors of this kind occur in the uterus; thus, the vagina is an extremely rare site. A 34-year-old woman presented with endometrial stromal sarcoma arising in the vagina. No correlative endometriosis was found. Because of the uncommon location, this tumor was differentiated from other more common neoplasms of the vagina, particularly embryonal rhabdomyosarcoma and other smooth muscle tumors. Although the pathogenesis of endometrial stromal tumors remains controversial, the most common theory of its origin is heterotopic Müllerian tissue such as endometriosis tissue. Primitive cells of the pelvis and retroperitoneum are an alternative possible origin for the tumor if endometriosis is not present. According to the literature, the tumor has a fairly good prognosis compared with other vaginal sarcomas. Surgery combined with adjuvant radiotherapy appears to be an adequate treatment.", "title": "" }, { "docid": "51743d233ec269cfa7e010d2109e10a6", "text": "Stress is a part of every life to varying degrees, but individuals differ in their stress vulnerability. Stress is usefully viewed from a biological perspective; accordingly, it involves activation of neurobiological systems that preserve viability through change or allostasis. Although they are necessary for survival, frequent neurobiological stress responses increase the risk of physical and mental health problems, perhaps particularly when experienced during periods of rapid brain development. Recently, advances in noninvasive measurement techniques have resulted in a burgeoning of human developmental stress research. Here we review the anatomy and physiology of stress responding, discuss the relevant animal literature, and briefly outline what is currently known about the psychobiology of stress in human development, the critical role of social regulation of stress neurobiology, and the importance of individual differences as a lens through which to approach questions about stress experiences during development and child outcomes.", "title": "" }, { "docid": "6ef244a7eb6a5df025e282e1cc5f90aa", "text": "Public infrastructure-as-a-service clouds, such as Amazon EC2 and Microsoft Azure allow arbitrary clients to run virtual machines (VMs) on shared physical infrastructure. This practice of multi-tenancy brings economies of scale, but also introduces the threat of malicious VMs abusing the scheduling of shared resources. Recent works have shown how to mount crossVM side-channel attacks to steal cryptographic secrets. 
The straightforward solution is hard isolation that dedicates hardware to each VM. However, this comes at the cost of reduced efficiency. We investigate the principle of soft isolation: reduce the risk of sharing through better scheduling. With experimental measurements, we show that a minimum run time (MRT) guarantee for VM virtual CPUs that limits the frequency of preemptions can effectively prevent existing Prime+Probe cache-based side-channel attacks. Through experimental measurements, we find that the performance impact of MRT guarantees can be very low, particularly in multi-core settings. Finally, we integrate a simple per-core CPU state cleansing mechanism, a form of hard isolation, into Xen. It provides further protection against side-channel attacks at little cost when used in conjunction with an MRT guarantee.", "title": "" } ]
scidocsrr
62fb52220818dbad051dadbd16f37eb6
Managing Conflicts in Goal-Driven Requirements Engineering
[ { "docid": "2248c955d3fd7d8119fde48560db1962", "text": "Requirements engineering is concerned with the identification of high-level goals to be achieved by the system envisioned, the refinement of such goals, the operationalization of goals into services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices and programs. Goal refinement and operationalization is a complex process which is not well supported by current requirements engineering technology. Ideally some form of formal support should be provided, but formal methods are difficult and costly to apply at this stage.This paper presents an approach to goal refinement and operationalization which is aimed at providing constructive formal support while hiding the underlying mathematics. The principle is to reuse generic refinement patterns from a library structured according to strengthening/weakening relationships among patterns. The patterns are once for all proved correct and complete. They can be used for guiding the refinement process or for pointing out missing elements in a refinement. The cost inherent to the use of a formal method is thus reduced significantly. Tactics are proposed to the requirements engineer for grounding pattern selection on semantic criteria.The approach is discussed in the context of the multi-paradigm language used in the KAOS method; this language has an external semantic net layer for capturing goals, constraints, agents, objects and actions together with their links, and an inner formal assertion layer that includes a real-time temporal logic for the specification of goals and constraints. Some frequent refinement patterns are high-lighted and illustrated through a variety of examples.The general principle is somewhat similar in spirit to the increasingly popular idea of design patterns, although it is grounded on a formal framework here.", "title": "" } ]
[ { "docid": "ff001ac2e44a6118b13866d01c2826cf", "text": "In this paper, a miniaturized 122 GHz ISM band FMCW radar is used to achieve micrometer accuracy. The radar consists of a SiGe single chip radar sensor and LCP off-chip antennas. The antennas are integrated in a QFN package. To increase the gain of the radar, an additional lens is used. A combined frequency and phase evaluation algorithm provides micrometer accuracy. The influence of the lens phase center on the beat frequency phase and hence, the overall accuracy is shown. Furthermore, accuracy limitations of the radar system over larger measurement distances are investigated. Accuracies of 200 μm and 2 μm are achieved over a distance of 1.9 m and 5 mm, respectively.", "title": "" }, { "docid": "fcf84abf8b829c33a5da1716e390971d", "text": "The value of a visualization evolved in a digital humanities project is per se not evenly high for both involved research fields. When an approach is too complex – which counts as a strong argument for a publication in a visualization realm – it might get invaluable for humanities scholars due to problems of comprehension. On the other hand, if a clean, easily comprehensible visualization is valuable for a humanities scholar, the missing novelty most likely impedes a computer science publication. My own digital humanities background has shown that it is indeed a balancing act to generate beneficial research results for both the visualization and the digital humanities fields. To find out how visualizations are used as means to communicate humanities matters and to assess the impact of the visualization community to the digital humanities field, I surveyed the long papers of the last four annual digital humanities conferences, discovering that visualization scholars are rarely involved in collaborations that produce valuable digital humanities results, in other words, it seems hard to walk the tightrope of generating valuable research for both fields. Derived from my own digital humanities experiences, I suggest a methodology how to design a digital humanities project to overcome this issue.", "title": "" }, { "docid": "170f14fbf337186c8bd9f36390916d2e", "text": "In this paper, we draw upon two sets of theoretical resources to develop a comprehensive theory of sexual offender rehabilitation named the Good Lives Model-Comprehensive (GLM-C). The original Good Lives Model (GLM-O) forms the overarching values and principles guiding clinical practice in the GLM-C. In addition, the latest sexual offender theory (i.e., the Integrated Theory of Sexual Offending; ITSO) provides a clear etiological grounding for these principles. The result is a more substantial and improved rehabilitation model that is able to conceptually link latest etiological theory with clinical practice. Analysis of the GLM-C reveals that it also has the theoretical resources to secure currently used self-regulatory treatment practice within a meaningful structure. D 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5e6990d8f1f81799e2e7fdfe29d14e4d", "text": "Underwater wireless communications refer to data transmission in unguided water environment through wireless carriers, i.e., radio-frequency (RF) wave, acoustic wave, and optical wave. In comparison to RF and acoustic counterparts, underwater optical wireless communication (UOWC) can provide a much higher transmission bandwidth and much higher data rate. Therefore, we focus, in this paper, on the UOWC that employs optical wave as the transmission carrier. 
In recent years, many potential applications of UOWC systems have been proposed for environmental monitoring, offshore exploration, disaster precaution, and military operations. However, UOWC systems also suffer from severe absorption and scattering introduced by underwater channels. In order to overcome these technical barriers, several new system design approaches, which are different from the conventional terrestrial free-space optical communication, have been explored in recent years. We provide a comprehensive and exhaustive survey of the state-of-the-art UOWC research in three aspects: 1) channel characterization; 2) modulation; and 3) coding techniques, together with the practical implementations of UOWC.", "title": "" }, { "docid": "fda176005a8edbec3d6dd4796826bb27", "text": "In the perspective of a sustainable urban planning, it is necessary to investigate cities in a holistic way and to accept surprises in the response of urban environments to a particular set of strategies. For example, the process of inner-city densification may limit air pollution, carbon emissions, and energy use through reduced transportation; on the other hand, the resulting street canyons could lead to local levels of pollution that could be higher than in a low-density urban setting. The holistic approach to sustainable urban planning implies using different models in an integrated way that is capable of simulating the urban system. As the interconnection of such models is not a trivial task, one of the key elements that may be applied is the description of the urban geometric properties in an “interoperable” way. Focusing on air quality as one of the most pronounced urban problems, the geometric aspects of a city may be described by objects such as those defined in CityGML, so that an appropriate air quality model can be applied for estimating the quality of the urban air on the basis of atmospheric flow and chemistry equations. It is generally admitted that an ontology-based approach can provide a generic and robust way to interconnect different models. However, a direct approach, that consists in establishing correspondences between concepts, is not sufficient in the present situation. One has to take into account, among other things, the computations involved in the correspondences between concepts. In this paper we first present theoretical background and motivations for the interconnection of 3D city models and other models related to sustainable development and urban planning. Then we present a practical experiment based on the interconnection of CityGML with an air quality model. Our approach is based on the creation of an ontology of air quality models and on the extension of an ontology of urban planning process (OUPP) that acts as an ontology mediator.", "title": "" }, { "docid": "b3cdd76dd50bea401ede3bb945c377dc", "text": "First we report on a new threat campaign, underway in Korea, which infected around 20,000 Android users within two months. The campaign attacked mobile users with malicious applications spread via different channels, such as email attachments or SMS spam. A detailed investigation of the Android malware resulted in the identification of a new Android malware family Android/BadAccents. The family represents current state-of-the-art in mobile malware development for banking trojans. Second, we describe in detail the techniques this malware family uses and confront them with current state-of-the-art static and dynamic codeanalysis techniques for Android applications. 
We highlight various challenges for automatic malware analysis frameworks that significantly hinder the fully automatic detection of malicious components in current Android malware. Furthermore, the malware exploits a previously unknown tapjacking vulnerability in the Android operating system, which we describe. As a result of this work, the vulnerability, affecting all Android versions, will be patched in one of the next releases of the Android Open Source Project.", "title": "" }, { "docid": "3380497ab11a7f0e34e8095d35a83f71", "text": "The reparameterization gradient has become a widely used method to obtain Monte Carlo gradients to optimize the variational objective. However, this technique does not easily apply to commonly used distributions such as beta or gamma without further approximations, and most practical applications of the reparameterization gradient fit Gaussian distributions. In this paper, we introduce the generalized reparameterization gradient, a method that extends the reparameterization gradient to a wider class of variational distributions. Generalized reparameterizations use invertible transformations of the latent variables which lead to transformed distributions that weakly depend on the variational parameters. This results in new Monte Carlo gradients that combine reparameterization gradients and score function gradients. We demonstrate our approach on variational inference for two complex probabilistic models. The generalized reparameterization is e ective: even a single sample from the variational distribution is enough to obtain a low-variance gradient.", "title": "" }, { "docid": "0a9a94bd83dfbbba2815f8575f1cb8a3", "text": "To create with an autonomous mobile robot a 3D volumetric map of a scene it is necessary to gage several 3D scans and to merge them into one consistent 3D model. This paper provides a new solution to the simultaneous localization and mapping (SLAM) problem with six degrees of freedom. Robot motion on natural surfaces has to cope with yaw, pitch and roll angles, turning pose estimation into a problem in six mathematical dimensions. A fast variant of the Iterative Closest Points algorithm registers the 3D scans in a common coordinate system and relocalizes the robot. Finally, consistent 3D maps are generated using a global relaxation. The algorithms have been tested with 3D scans taken in the Mathies mine, Pittsburgh, PA. Abandoned mines pose significant problems to society, yet a large fraction of them lack accurate 3D maps.", "title": "" }, { "docid": "24a10176ec2367a6a0b5333d57b894b8", "text": "Automated classification of biological cells according to their 3D morphology is highly desired in a flow cytometer setting. We have investigated this possibility experimentally and numerically using a diffraction imaging approach. A fast image analysis software based on the gray level co-occurrence matrix (GLCM) algorithm has been developed to extract feature parameters from measured diffraction images. The results of GLCM analysis and subsequent classification demonstrate the potential for rapid classification among six types of cultured cells. Combined with numerical results we show that the method of diffraction imaging flow cytometry has the capacity as a platform for high-throughput and label-free classification of biological cells.", "title": "" }, { "docid": "4073fc19e108b11c80b71dfb9cb73268", "text": "In today's fast-paced, fiercely competitive world of commercial new product development, speed and flexibility are essential. 
Companies are increasingly realizing that the old, sequential approach to developing new products simply won't get the job done. Instead, companies in Japan and the United States are using a holistic method—as in rugby, the ball gets passed within the team as it moves as a unit up the field. This holistic approach has six characteristics: built-in instability, self-organizing project teams, overlapping development phases, \"multilearning,\" subtle control, and organizational transfer of learning. The six pieces fit together like a jigsaw puzzle, forming a fast and flexible process for new product development. Just as important, the new approach can act as a change agent: it is a vehicle for introducing creative, market-driven ideas and processes into an old, rigid organization. Mr. Takeuchi is an associate professor and Mr. Nonaka, a professor at Hitotsubashi University in Japan. Mr. Takeuchi's research has focused on marketing and global competition. Mr. Nonaka has published widely in Japan on organizations, strategy, and marketing. The rules of the game in new product development are changing. Many companies have discovered that it takes more than the accepted basics of high quality, low cost, and differentiation to excel in today's competitive market. It also takes speed and flexibility. This change is reflected in the emphasis companies are placing on new products as a source of new sales and profits. At 3M, for example, products less than five years old account for 25% of sales. A 1981 survey of 700 U.S. companies indicated that new products would account for one-third of all profits in the 1980s, an increase from one-fifth in the 1970s. This new emphasis on speed and flexibility calls for a different approach for managing new product development. The traditional sequential or \"relay race\" approach to product development - exemplified by the National Aeronautics and Space Administration's phased program planning (PPP) system - may conflict with the goals of maximum speed and flexibility. Instead, a holistic or \"rugby\" approach - where a team tries to go the distance as a unit, passing the ball back and forth - may better serve today's competitive requirements. Under the old approach, a product development process moved like a relay race, with one group of functional specialists passing the baton to the next group. The project went sequentially from phase to phase: concept development, feasibility testing, product design, development process, pilot production", "title": "" }, { "docid": "09623c821f05ffb7840702a5869be284", "text": "Area-restricted search (ARS) is a foraging strategy used by many animals to locate resources. The behavior is characterized by a time-dependent reduction in turning frequency after the last resource encounter. This maximizes the time spent in areas in which resources are abundant and extends the search to a larger area when resources become scarce. We demonstrate that dopaminergic and glutamatergic signaling contribute to the neural circuit controlling ARS in the nematode Caenorhabditis elegans. Ablation of dopaminergic neurons eliminated ARS behavior, as did application of the dopamine receptor antagonist raclopride. Furthermore, ARS was affected by mutations in the glutamate receptor subunits GLR-1 and GLR-2 and the EAT-4 glutamate vesicular transporter. Interestingly, preincubation on dopamine restored the behavior in worms with defective dopaminergic signaling, but not in glr-1, glr-2, or eat-4 mutants. 
This suggests that dopaminergic and glutamatergic signaling function in the same pathway to regulate turn frequency. Both GLR-1 and GLR-2 are expressed in the locomotory control circuit that modulates the direction of locomotion in response to sensory stimuli and the duration of forward movement during foraging. We propose a mechanism for ARS in C. elegans in which dopamine, released in response to food, modulates glutamatergic signaling in the locomotory control circuit, thus resulting in an increased turn frequency.", "title": "" }, { "docid": "30a17bdce5eb936aad1ddf56c285e808", "text": "Currently, 4G mobile communication systems are supported by the 3GPP standard. In view of the significant increase in mobile data traffic, it is necessary to characterize it to improve the performance of current wireless networks. Indeed, video transmission and video streaming are fundamental assets for the upcoming smart cities and urban environments. Due to the high costs of deploying a real LTE system, emulation systems that consider real operating conditions emerge as a successful alternative. On the other hand, many studies with LTE simulations and emulations do not present information of basic adjustment parameters like the propagation model, nor of validation of the results with real conditions. This paper shows the validation with an ANOVA statistical analysis of an LTE emulation system developed in NS-3 for the live video streaming service. For the validation, different QoS parameters and real conditions have been used. Also, two protocols, namely RTMP and RTSP, have been tested. It is demonstrated that the emulation scenario is appropriate to characterize the traffic that will later allow to carry out a proper performance analysis of the service and technology under study.", "title": "" }, { "docid": "0c45c054ce15200de26c4c39be5c420d", "text": "Methods for super-resolution can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale, as well as across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to the classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales.", "title": "" }, { "docid": "0472d4f6c84524a73b7e902cd2d3e9ba", "text": "Article history: Received 24 January 2015 Received in revised form 12 August 2016 Accepted 16 August 2016 Available online xxxx E-commerce has provided newopportunities for both businesses and consumers to easily share information,find and buy a product, increasing the ease of movement from one company to another as well as to increase the risk of churn. 
In this study we develop a churn prediction model tailored for B2B e-commerce industry by testing the forecasting capability of a newmodel, the support vector machine (SVM) based on the AUC parameter-selection technique (SVMauc). The predictive performance of SVMauc is benchmarked to logistic regression, neural network and classic support vector machine. Our study shows that the parameter optimization procedure plays an important role in the predictive performance and the SVMauc points out good generalization performance when applied to noisy, imbalance and nonlinear marketing data outperforming the other methods. Thus, our findings confirm that the data-driven approach to churn prediction and the development of retention strategies outperforms commonly used managerial heuristics in B2B e-commerce industry. © 2016 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "15e31918fcebb95beaf381d93d7605a5", "text": "One challenge for UHF RFID passive tag design is to obtain a low-profile antenna that minimizes the influence of near-body or attached objects without sacrificing both read range and universal UHF RFID band interoperability. A new improved design of a RFID passive tag antenna is presented that performs well near problematic surfaces (human body, liquids, metals) across most of the universal UHF RFID (840-960 MHz) band. The antenna is based on a low-profile printed configuration with slots, and it is evaluated through extensive simulations and experimental tests.", "title": "" }, { "docid": "c1a6b9df700226212dca8857e7001896", "text": "Knowing the location of a social media user and their posts is important for various purposes, such as the recommendation of location-based items/services, and locality detection of crisis/disasters. This paper describes our submission to the shared task “Geolocation Prediction in Twitter” of the 2nd Workshop on Noisy User-generated Text. In this shared task, we propose an algorithm to predict the location of Twitter users and tweets using a multinomial Naive Bayes classifier trained on Location Indicative Words and various textual features (such as city/country names, #hashtags and @mentions). We compared our approach against various baselines based on Location Indicative Words, city/country names, #hashtags and @mentions as individual feature sets, and experimental results show that our approach outperforms these baselines in terms of classification accuracy, mean and median error distance.", "title": "" }, { "docid": "2d3b452d7a8cf8f29ac1896f14c43faa", "text": "Since the amount of information on the internet is growing rapidly, it is not easy for a user to find relevant information for his/her query. To tackle this issue, much attention has been paid to Automatic Document Summarization. The key point in any successful document summarizer is a good document representation. The traditional approaches based on word overlapping mostly fail to produce that kind of representation. Word embedding, distributed representation of words, has shown an excellent performance that allows words to match on semantic level. Naively concatenating word embeddings makes the common word dominant which in turn diminish the representation quality. In this paper, we employ word embeddings to improve the weighting schemes for calculating the input matrix of Latent Semantic Analysis method. Two embedding-based weighting schemes are proposed and then combined to calculate the values of this matrix. 
The new weighting schemes are modified versions of the augment weight and the entropy frequency. The new schemes combine the strength of the traditional weighting schemes and word embedding. The proposed approach is experimentally evaluated on three well-known English datasets, DUC 2002, DUC 2004 and Multilingual 2015 Single-document Summarization for English. The proposed model performs comprehensively better compared to the state-of-the-art methods, by at least 1% ROUGE points, leading to a conclusion that it provides a better document representation and a better document summary as a result.", "title": "" }, { "docid": "6a470404c36867a18a98fafa9df6848f", "text": "Memory links use variable-impedance drivers, feed-forward equalization (FFE) [1], on-die termination (ODT) and slew-rate control to optimize the signal integrity (SI). An asymmetric DRAM link configuration exploits the availability of a fast CMOS technology on the memory controller side to implement powerful equalization, while keeping the circuit complexity on the DRAM side relatively simple. This paper proposes the use of Tomlinson Harashima precoding (THP) [2-4] in a memory controller as replacement of the afore-mentioned SI optimization techniques. THP is a transmitter equalization technique in which post-cursor inter-symbol interference (ISI) is cancelled by means of an infinite impulse response (IIR) filter with modulo-based amplitude limitation; similar to a decision feedback equalizer (DFE) on the receive side. However, in contrast to a DFE, THP does not suffer from error propagation.", "title": "" }, { "docid": "b48d9053c70f51aa766a3f4706912654", "text": "Social tags are free text labels that are applied to items such as artists, albums and songs. Captured in these tags is a great deal of information that is highly relevant to Music Information Retrieval (MIR) researchers including information about genre, mood, instrumentation, and quality. Unfortunately there is also a great deal of irrelevant information and noise in the tags. Imperfect as they may be, social tags are a source of human-generated contextual knowledge about music that may become an essential part of the solution to many MIR problems. In this article, we describe the state of the art in commercial and research social tagging systems for music. We describe how tags are collected and used in current systems. We explore some of the issues that are encountered when using tags, and we suggest possible areas of exploration for future research.", "title": "" }, { "docid": "f8c3d3211b1a79cb6ef3fa036a849535", "text": "Income is known to be associated with happiness 1 , but debates persist about the exact nature of this relationship 2,3 . Does happiness rise indefinitely with income, or is there a point at which higher incomes no longer lead to greater well-being? We examine this question using data from the Gallup World Poll, a representative sample of over 1.7 million individuals worldwide. Controlling for demographic factors, we use spline regression models to statistically identify points of ‘income satiation’. Globally, we find that satiation occurs at $95,000 for life evaluation and $60,000 to $75,000 for emotional well-being. However, there is substantial variation across world regions, with satiation occurring later in wealthier regions. We also find that in certain parts of the world, incomes beyond satiation are associated with lower life evaluations. 
These findings on income and happiness have practical and theoretical significance at the individual, institutional and national levels. They point to a degree of happiness adaptation [4,5] and that money influences happiness through the fulfilment of both needs and increasing material desires [6]. Jebb et al. use data from the Gallup World Poll to show that happiness does not rise indefinitely with income: globally, income satiation occurs at US$95,000 for life evaluation and US$60,000 to US$75,000 for emotional well-being.", "title": "" } ]
scidocsrr
79de92bde0c38515918923ff0f2451aa
An improved GA and a novel PSO-GA-based hybrid algorithm
[ { "docid": "d8780989fc125b69beb456986819d624", "text": "The particle swarm optimization algorithm is analyzed using standard results from the dynamic system theory. Graphical parameter selection guidelines are derived. The exploration–exploitation tradeoff is discussed and illustrated. Examples of performance on benchmark functions superior to previously published results are given. © 2002 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "ea1e84dfb1889826b0356dcd85182ec4", "text": "With the support of the wearable devices, healthcare services started a new phase in serving patients need. The new technology adds more facilities and luxury to the healthcare services, Also changes patients' lifestyles from the traditional way of monitoring to the remote home monitoring. Such new approach faces many challenges related to security as sensitive data get transferred through different type of channels. They are four main dimensions in terms of security scope such as trusted sensing, computation, communication, privacy and digital forensics. In this paper we will try to focus on the security challenges of the wearable devices and IoT and their advantages in healthcare sectors.", "title": "" }, { "docid": "894164566e284f0e4318d94cc6768871", "text": "This paper investigates the problems of signal reconstruction and blind deconvolution for graph signals that have been generated by an originally sparse input diffused through the network via the application of a graph filter operator. Assuming that the support of the sparse input signal is unknown, and that the diffused signal is observed only at a subset of nodes, we address the related problems of: 1) identifying the input and 2) interpolating the values of the diffused signal at the non-sampled nodes. We first consider the more tractable case where the coefficients of the diffusing graph filter are known and then address the problem of joint input and filter identification. The corresponding blind identification problems are formulated, novel convex relaxations are discussed, and modifications to incorporate a priori information on the sparse inputs are provided.", "title": "" }, { "docid": "c0d4068fa86fd14b0170a2acf1fdd252", "text": "This paper presents a 15-bit digital power amplifier (DPA) with 1.6W saturated output power. The topology of the polar switched-current DPA is discussed together with the architecture of the output transformer which is implemented in BEOL as well as in WLCSP metal layers. The chip is fabricated in a standard 28nm CMOS process and exhibits an EVM of 3.6%, E-UTRA ACLR of 34.1dB, output noise of −145.7dBc/Hz at 45 MHz offset and 22.4% DPA efficiency when generating a 26.8dBm LTE-1.4 output signal at 2.3GHz. The total area of the DPA is 0.5mm2.", "title": "" }, { "docid": "23ca24a7920f98796cf9ac695be3ffae", "text": "As software systems become more complex and configurable, failures due to misconfigurations are becoming a critical problem. Such failures often have serious functionality, security and financial consequences. Further, diagnosis and remediation for such failures require reasoning across the software stack and its operating environment, making it difficult and costly. We present a framework and tool called EnCore to automatically detect software misconfigurations. EnCore takes into account two important factors that are unexploited before: the interaction between the configuration settings and the executing environment, as well as the rich correlations between configuration entries. We embrace the emerging trend of viewing systems as data, and exploit this to extract information about the execution environment in which a configuration setting is used. EnCore learns configuration rules from a given set of sample configurations. With training data enriched with the execution context of configurations, EnCore is able to learn a broad set of configuration anomalies that spans the entire system. 
EnCore is effective in detecting both injected errors and known real-world problems - it finds 37 new misconfigurations in Amazon EC2 public images and 24 new configuration problems in a commercial private cloud. By systematically exploiting environment information and by learning correlation rules across multiple configuration settings, EnCore detects 1.6x to 3.5x more misconfiguration anomalies than previous approaches.", "title": "" }, { "docid": "8da0bdec21267924d16f9a04e6d9a7ef", "text": "Traffic light timing optimization is still an active line of research despite the wealth of scientific literature on the topic, and the problem remains unsolved for any non-toy scenario. One of the key issues with traffic light optimization is the large scale of the input information that is available for the controlling agent, namely all the traffic data that is continually sampled by the traffic detectors that cover the urban network. This issue has in the past forced researchers to focus on agents that work on localized parts of the traffic network, typically on individual intersections, and to coordinate every individual agent in a multi-agent setup. In order to overcome the large scale of the available state information, we propose to rely on the ability of deep Learning approaches to handle large input spaces, in the form of Deep Deterministic Policy Gradient (DDPG) algorithm. We performed several experiments with a range of models, from the very simple one (one intersection) to the more complex one (a big city section).", "title": "" }, { "docid": "fea12b3870cdb978b33e480482124cfd", "text": "The activity of labeling of documents according to their content is known as text categorization. Many experiments have been carried out to enhance text categorization by adding background knowledge to the document using knowledge repositories like Word Net, Open Project Directory (OPD), Wikipedia and Wikitology. In our previous work, we have carried out intensive experiments by extracting knowledge from Wikitology and evaluating the experiment on Support Vector Machine with 10- fold cross-validations. The results clearly indicate Wikitology is far better than other knowledge bases. In this paper we are comparing Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers under text enrichment through Wikitology. We validated results with 10-fold cross validation and shown that NB gives an improvement of +28.78%, on the other hand SVM gives an improvement of +636% when compared with baseline results. Naïve Bayes classifier is better choice when external enriching is used through any external knowledge base.", "title": "" }, { "docid": "ecabde376c5611240e35d3eb574b1979", "text": "For high precision Synthetic Aperture Radar (SAR) processing, the determination of the Doppler centroid is indispensable. The Doppler frequency estimated from azimuth spectra, however, suffers from the fact that the data are sampled with the pulse repetition frequency (PRF) and an ambiguity about the correct PRF band remains. A new algorithm to resolve this ambiguity is proposed. It uses the fact that the Doppler centroid depends linearly on the transmitted radar frequency for a given antenna squint angle. This dependence is not subject to PRF ambiguities. It can be measured by Fourier transforming the SAR data in the range direction and estimating the Doppler centroid at each range frequency. The achievable accuracy is derived theoretically and verified with Seasat data of different scene content. 
The algorithm works best with low contrast scenes, where the conventional look correlation technique fails. It needs no iterative processing of the SAR data and causes only low computational load.", "title": "" }, { "docid": "645e69205aea3887d954f825306a1052", "text": "Continuous outlier detection in data streams has important applications in fraud detection, network security, and public health. The arrival and departure of data objects in a streaming manner impose new challenges for outlier detection algorithms, especially in time and space efficiency. In the past decade, several studies have been performed to address the problem of distance-based outlier detection in data streams (DODDS), which adopts an unsupervised definition and does not have any distributional assumptions on data values. Our work is motivated by the lack of comparative evaluation among the state-of-the-art algorithms using the same datasets on the same platform. We systematically evaluate the most recent algorithms for DODDS under various stream settings and outlier rates. Our extensive results show that in most settings, the MCOD algorithm offers the superior performance among all the algorithms, including the most recent algorithm Thresh LEAP.", "title": "" }, { "docid": "254f2ef4608ea3c959e049073ad063f8", "text": "Recently, the long-term evolution (LTE) is considered as one of the most promising 4th generation (4G) mobile standards to increase the capacity and speed of mobile handset networks [1]. In order to realize the LTE wireless communication system, the diversity and multiple-input multiple-output (MIMO) systems have been introduced [2]. In a MIMO mobile user terminal such as handset or USB dongle, at least two uncorrelated antennas should be placed within an extremely restricted space. This task becomes especially difficult when a MIMO planar antenna is designed for LTE band 13 (the corresponding wavelength is 390 mm). Due to the limited space available for antenna elements, the antennas are strongly coupled with each other and have narrow bandwidth.", "title": "" }, { "docid": "c4b6df3abf37409d6a6a19646334bffb", "text": "Classification in imbalanced domains is a recent challenge in data mining. We refer to imbalanced classification when data presents many examples from one class and few from the other class, and the less representative class is the one which has more interest from the point of view of the learning task. One of the most used techniques to tackle this problem consists in preprocessing the data previously to the learning process. This preprocessing could be done through under-sampling; removing examples, mainly belonging to the majority class; and over-sampling, by means of replicating or generating new minority examples. In this paper, we propose an under-sampling procedure guided by evolutionary algorithms to perform a training set selection for enhancing the decision trees obtained by the C4.5 algorithm and the rule sets obtained by PART rule induction algorithm. The proposal has been compared with other under-sampling and over-sampling techniques and the results indicate that the new approach is very competitive in terms of accuracy when comparing with over-sampling and it outperforms standard under-sampling. Moreover, the obtained models are smaller in terms of number of leaves or rules generated and they can considered more interpretable. The results have been contrasted through non-parametric statistical tests over multiple data sets. Crown Copyright 2009 Published by Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "1027ce2c8e3a231fe8ab3f469a857f82", "text": "There are two major challenges for a high-performance remote-sensing database. First, it must provide low-latency retrieval of very large volumes of spatio-temporal data. This requires effective declustering and placement of a multidimensional dataset onto a large disk farm. Second, the order of magnitude reduction in data-size due to postprocessing makes it imperative, from a performance perspective, that the postprocessing be done on the machine that holds the data. This requires careful coordination of computation and data retrieval. This paper describes the design, implementation and evaluation of Titan, a parallel shared-nothing database designed for handling remotesensing data. The computational platform for Titan is a 16-processor IBM SP-2 with four fast disks attached to each processor. Titan is currently operational and contains about 24 GB of AVHRR data from the NOAA-7 satellite. The experimental results show that Titan provides good performance for global queries and interactive response times for local queries.", "title": "" }, { "docid": "638336dba1dd589b0f708a9426483827", "text": "Girard's linear logic can be used to model programming languages in which each bound variable name has exactly one \"occurrence\"---i.e., no variable can have implicit \"fan-out\"; multiple uses require explicit duplication. Among other nice properties, \"linear\" languages need no garbage collector, yet have no dangling reference problems. We show a natural equivalence between a \"linear\" programming language and a stack machine in which the top items can undergo arbitrary permutations. Such permutation stack machines can be considered combinator abstractions of Moore's Forth programming language.", "title": "" }, { "docid": "dcee2be83eba32476268e1e4383b570d", "text": "Recent advances in the field of nanotechnology have led to the synthesis and characterization of an assortment of quasi-one-dimensional (Q1D) structures, such as nanowires, nanoneedles, nanobelts and nanotubes. These fascinating materials exhibit novel physical properties owing to their unique geometry with high aspect ratio. They are the potential building blocks for a wide range of nanoscale electronics, optoelectronics, magnetoelectronics, and sensing devices. Many techniques have been developed to grow these nanostructures with various compositions. Parallel to the success with group IV and groups III–V compounds semiconductor nanostructures, semiconducting metal oxide materials with typically wide band gaps are attracting increasing attention. This article provides a comprehensive review of the state-of-the-art research activities that focus on the Q1D metal oxide systems and their physical property characterizations. It begins with the synthetic mechanisms and methods that have been exploited to form these structures. A range of remarkable characteristics are then presented, organized into sections covering a number of metal oxides, such as ZnO, In2O3, SnO2, Ga2O3, and TiO2, etc., describing their electrical, optical, magnetic, mechanical and chemical sensing properties. These studies constitute the basis for developing versatile applications based on metal oxide Q1D systems, and the current progress in device development will be highlighted. # 2006 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "33ba3582dc7873a7e14949775a9b26c1", "text": "Few conservation projects consider climate impacts or have a process for developing adaptation strategies. To advance climate adaptation for biodiversity conservation, we tested a step-by-step approach to developing adaptation strategies with 20 projects from diverse geographies. Project teams assessed likely climate impacts using historical climate data, future climate predictions, expert input, and scientific literature. They then developed adaptation strategies that considered ecosystems and species of concern, project goals, climate impacts, and indicators of progress. Project teams identified 176 likely climate impacts and developed adaptation strategies to address 42 of these impacts. The most common impacts were to habitat quantity or quality, and to hydrologic regimes. Nearly half of expected impacts were temperature-mediated. Twelve projects indicated that the project focus, either focal ecosystems and species or project boundaries, need to change as a result of considering climate impacts. More than half of the adaptation strategies were resistance strategies aimed at preserving the status quo. The rest aimed to make ecosystems and species more resilient in the face of expected changes. All projects altered strategies in some way, either by adding new actions, or by adjusting existing actions. Habitat restoration and enactment of policies and regulations were the most frequently prescribed, though every adaptation strategy required a unique combination of actions. While the effectiveness of these adaptation strategies remains to be evaluated, the application of consistent guidance has yielded important early lessons about how, when, and how often conservation projects may need to be modified to adapt to climate change.", "title": "" }, { "docid": "082630a33c0cc0de0e60a549fc57d8e8", "text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.", "title": "" }, { "docid": "a0ee42eabf32de3b0307e9fbdfbaf857", "text": "To leverage modern hardware platforms to their fullest, more and more database systems embrace compilation of query plans to native code. In the research community, there is an ongoing debate about the best way to architect such query compilers. This is perceived to be a difficult task, requiring techniques fundamentally different from traditional interpreted query execution. 
\n We aim to contribute to this discussion by drawing attention to an old but underappreciated idea known as Futamura projections, which fundamentally link interpreters and compilers. Guided by this idea, we demonstrate that efficient query compilation can actually be very simple, using techniques that are no more difficult than writing a query interpreter in a high-level language. Moreover, we demonstrate how intricate compilation patterns that were previously used to justify multiple compiler passes can be realized in one single, straightforward, generation pass. Key examples are injection of specialized index structures, data representation changes such as string dictionaries, and various kinds of code motion to reduce the amount of work on the critical path.\n We present LB2: a high-level query compiler developed in this style that performs on par with, and sometimes beats, the best compiled query engines on the standard TPC-H benchmark.", "title": "" }, { "docid": "861f76c061b9eb52ed5033bdeb9a3ce5", "text": "2007S. Robson Walton Chair in Accounting, University of Arkansas 2007-2014; 2015-2016 Accounting Department Chair, University of Arkansas 2014Distinguished Professor, University of Arkansas 2005-2014 Professor, University of Arkansas 2005-2008 Ralph L. McQueen Chair in Accounting, University of Arkansas 2002-2005 Associate Professor, University of Kansas 1997-2002 Assistant Professor, University of Kansas", "title": "" }, { "docid": "243b03a37b5950f69ab5df937268592b", "text": "Now-a-days synthesis and characterization of silver nanoparticles (AgNPs) through biological entity is quite interesting to employ AgNPs for various biomedical applications in general and treatment of cancer in particular. This paper presents the green synthesis of AgNPs using leaf extract of Podophyllum hexandrum Royle and optimized with various parameters such as pH, temperature, reaction time, volume of extract and metal ion concentration for synthesis of AgNPs. TEM, XRD and FTIR were adopted for characterization. The synthesized nanoparticles were found to be spherical shaped with average size of 14 nm. Effects of AgNPs were analyzed against human cervical carcinoma cells by MTT Assay, quantification of ROS, RT-PCR and western blotting techniques. The overall result indicates that AgNPs can selectively inhibit the cellular mechanism of HeLa by DNA damage and caspase mediated cell death. This biological procedure for synthesis of AgNPs and selective inhibition of cancerous cells gives an alternative avenue to treat human cancer effectively.", "title": "" }, { "docid": "c2957e7378650911a09b3c605951ff38", "text": "Vehicular networking is at the corner from early research to final deployment. This phase requires more field testing and real-world experimentation. Most Field Operational Tests (FOTs) are based on proprietary commercial hardware that only allows for marginal modifications of the protocol stack. Furthermore, the roll-out of updated implementations for new or changing protocol standards often takes a prohibitively long time. We developed one of the first complete Open Source experimental and prototyping platform for vehicular networking solutions. Our system supports most of the ETSI ITS-G5 features and runs on standard Linux. 
New protocol features and updates could now easily be done by and shared with the vehicular networking R&D community.", "title": "" }, { "docid": "84f7b499cd608de1ee7443fcd7194f19", "text": "In this paper, we present a new computationally efficient numerical scheme for the minimizing flow approach for optimal mass transport (OMT) with applications to non-rigid 3D image registration. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. Our implementation also employs multigrid, and parallel methodologies on a consumer graphics processing unit (GPU) for fast computation. Although computing the optimal map has been shown to be computationally expensive in the past, we show that our approach is orders of magnitude faster then previous work and is capable of finding transport maps with optimality measures (mean curl) previously unattainable by other works (which directly influences the accuracy of registration). We give results where the algorithm was used to compute non-rigid registrations of 3D synthetic data as well as intra-patient pre-operative and post-operative 3D brain MRI datasets.", "title": "" } ]
scidocsrr
729d8f4ffe692fc53091e27534b97394
Effective Pattern Discovery for Text Mining
[ { "docid": "c698f7d6b487cc7c87d7ff215d7f12b2", "text": "This paper reports a controlled study with statistical significance tests on five text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classifier, a neural network (NNet) approach, the Linear Least-squares Fit (LLSF) mapping and a Naive Bayes (NB) classifier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF significantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are sufficiently common (over 300 instances).", "title": "" }, { "docid": "ac25761de97d9aec895d1b8a92a44be3", "text": "Most research in text classification to date has used a “bag of words” representation in which each feature corresponds to a single word. This paper examines some alternative ways to represent text based on syntactic and semantic relationships between words (phrases, synonyms and hypernyms). We describe the new representations and try to justify our hypothesis that they could improve the performance of a rule-based learner. The representations are evaluated using the RIPPER learning algorithm on the Reuters-21578 and DigiTrad test corpora. On their own the new representations are not found to produce significant performance improvements. We also try combining classifiers based on different representations using a majority voting technique, and this improves performance on both test collections. In our opinion, more sophisticated Natural Language Processing techniques need to be developed before better text representations can be produced for classification.", "title": "" } ]
[ { "docid": "6c6eb7e817e210808018506953af1031", "text": "BACKGROUND\nNurses constitute the largest human resource element and have a great impact on quality of care and patient outcomes in health care organizations. The objective of this study was to examine the relationship between rewards and nurse motivation on public hospitals administrated by Addis Ababa health bureau.\n\n\nMETHODS\nA cross-sectional survey was conducted from June to December 2010 in 5 public hospitals in Addis Ababa. Among 794 nurses, 259 were selected as sample. Data was collected using self-administered questionnaire. After the data was collected, it was analysed using SPSS version 16.0 statistical software. The results were analysed in terms of descriptive statistics followed by inferential statistics on the variables.\n\n\nRESULTS\nA total of 230 questionnaires were returned from 259 questionnaires distributed to respondents. Results of the study revealed that nurses are not motivated and there is a statistical significant relationship between rewards and the nurse work motivation and a payment is the most important and more influential variable. Furthermore, there is significant difference in nurse work motivation based on age, educational qualification and work experience while there is no significant difference in nurse work motivation based on gender.\n\n\nCONCLUSION\nThe study shows that nurses are less motivated by rewards they received while rewards have significant and positive contribution for nurse motivation. Therefore, both hospital administrators' and Addis Ababa health bureau should revise the existing nurse motivation strategy.", "title": "" }, { "docid": "dd14f9eb9a9e0e4e0d24527cf80d04f4", "text": "The growing popularity of microblogging websites has transformed these into rich resources for sentiment mining. Even though opinion mining has more than a decade of research to boost about, it is mostly confined to the exploration of formal text patterns like online reviews, news articles etc. Exploration of the challenges offered by informal and crisp microblogging have taken roots but there is scope for a large way ahead. The proposed work aims at developing a hybrid model for sentiment classification that explores the tweet specific features and uses domain independent and domain specific lexicons to offer a domain oriented approach and hence analyze and extract the consumer sentiment towards popular smart phone brands over the past few years. The experiments have proved that the results improve by around 2 points on an average over the unigram baseline.", "title": "" }, { "docid": "2ce4d585edd54cede6172f74cf9ab8bb", "text": "Enterprise resource planning (ERP) systems have been widely implemented by numerous firms throughout the industrial world. While success stories of ERP implementation abound due to its potential in resolving the problem of fragmented information, a substantial number of these implementations fail to meet the goals of the organization. Some are abandoned altogether and others contribute to the failure of an organization. This article seeks to identify the critical factors of ERP implementation and uses statistical analysis to further delineate the patterns of adoption of the various concepts. A cross-sectional mail survey was mailed to business executives who have experience in the implementation of ERP systems. The results of this study provide empirical evidence that the theoretical constructs of ERP implementation are followed at varying levels. 
It offers some fresh insights into the current practice of ERP implementation. In addition, this study fills the need for ERP implementation constructs that can be utilized for further study of this important topic.", "title": "" }, { "docid": "123b35d403447a29eaf509fa707eddaa", "text": "Technology is the vital criteria to boosting the quality of life for everyone from new-borns to senior citizens. Thus, any technology to enhance the quality of life society has a value that is priceless. Nowadays Smart Wearable Technology (SWTs) innovation has been coming up to different sectors and is gaining momentum to be implemented in everyday objects. The successful adoption of SWTs by consumers will allow the production of new generations of innovative and high value-added products. The study attempts to predict the dynamics that play a role in the process through which consumers accept wearable technology. The research build an integrated model based on UTAUT2 and some external variables in order to investigate the direct and moderating effects of human expectation and behaviour on the awareness and adoption of smart products such as watch and wristband fitness. Survey will be chosen in order to test our model based on consumers. In addition, our study focus on different rate of adoption and expectation differences between early adopters and early majority in order to explore those differences and propose techniques to successfully cross the chasm between these two groups according to “Chasm theory”. For this aim and due to lack of prior research, Semi-structured focus groups will be used to obtain qualitative data for our research. Originality/value: To date, a few research exists addressing the adoption of smart wearable technologies. Therefore, the examination of consumers behaviour towards SWTs may provide orientations into the future that are useful for managers who can monitor how consumers make choices, how manufacturers should design successful market strategies, and how regulators can proscribe manipulative behaviour in this industry.", "title": "" }, { "docid": "15f51cbbb75d236a5669f613855312e0", "text": "The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. 
Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage.", "title": "" }, { "docid": "b9d78f22647d00aab0a79aa0c5dacdcf", "text": "Traditional GANs use a deterministic generator function (typically a neural network) to transform a random noise input z to a sample x that the discriminator seeks to distinguish. We propose a new GAN called Bayesian Conditional Generative Adversarial Networks (BC-GANs) that use a random generator function to transform a deterministic input y′ to a sample x. Our BC-GANs extend traditional GANs to a Bayesian framework, and naturally handle unsupervised learning, supervised learning, and semi-supervised learning problems. Experiments show that the proposed BC-GANs outperforms the state-of-the-arts.", "title": "" }, { "docid": "0a09f894029a0b8730918c14906dca9e", "text": "In the last few years, machine learning has become a very popular tool for analyzing financial text data, with many promising results in stock price forecasting from financial news, a development with implications for the E cient Markets Hypothesis (EMH) that underpins much economic theory. In this work, we explore recurrent neural networks with character-level language model pre-training for both intraday and interday stock market forecasting. In terms of predicting directional changes in the Standard & Poor’s 500 index, both for individual companies and the overall index, we show that this technique is competitive with other state-of-the-art approaches.", "title": "" }, { "docid": "115ed03ccee62fafc1606e6f6fdba1ce", "text": "High voltage SF6 circuit breaker must meet the breaking requirement for large short-circuit current, and ensure absence of breakdown after breaking small current. A 126kV high voltage SF6 circuit breaker was used as the research object in this paper. Based on the calculation results of non-equilibrium arc plasma material parameters, the distribution of pressure, temperature and density were calculated during the breaking progress. The electric field distribution was calculated in the course of flow movement, considering the influence of space charge on dielectric voltage. The change rule of the dielectric recovery progress was given based on the stream theory. The dynamic breakdown test circuit was built to measure the values of breakdown voltage under different open distance. The simulation results and experimental data are analyzed and the results show that: 1) Dielectric recovery speed (175kV/ms) is significantly faster than the voltage recovery rate (37.7kV/ms) during the arc extinguishing process. 2) The shorter the small current arcing time, the smaller the breakdown margin, so it is necessary to keep the arcing time longer than 0.5ms to ensure a large breakdown margin. 3) The calculated results are in good agreement with the experimental results. Since the breakdown voltage is less than the TRV in some test points, restrike may occur within 0.5ms after breaking, so arc extinguishment should be avoid in this time range.", "title": "" }, { "docid": "464065569c6540ac0c4fde8a1f72105d", "text": "Semantic role labeling (SRL) is a method for the semantic analysis of texts that adds a level of semantic abstraction on top of syntactic analysis, for instance adding semantic role labels like Agent on top of syntactic functions like Subject . 
SRL has been shown to benefit various natural language processing applications such as question answering, information extraction, and summarization. Automatic SRL systems are typically based on a predefined model of semantic predicate argument structure incorporated in lexical knowledge bases like PropBank or FrameNet. They are trained using supervised or semi-supervised machine learning methods using training data labeled with predicate (word sense) and role labels. Even state-of-the-art systems based on deep learning still rely on a labeled training set. However, despite the success in an experimental setting, the real-world application of SRL methods is still prohibited by severe coverage problems (lexicon coverage problem) and lack of domain-relevant training data for training supervised systems (domain adaptation problem). These issues apply to English, but are even more severe for other languages, for which only small resources exist. The goal of this thesis is to develop knowledge-based methods to improve lexicon coverage and training data coverage for SRL. We use linked lexical knowledge bases to extend the lexicon coverage and as a basis for automatic training data generation across languages and domains. Links between lexical resources have already been previously used to address this problem, but the linkings have not been explored and applied at a large scale and the resulting generated training data only contained predicate (word sense) labels, but no role labels. To create predicate and role labels, corpus-based methods have been used. These rely on the existence of labeled training data as sources for label transfer to unlabeled corpora. For certain languages, like German or Spanish, several lexical knowledge bases, but only small amounts of labeled training data exist. For such languages, knowledge-based methods promise greater improvements. In our experiments, we target FrameNet, a lexical-semantic resource with a strong focus on semantic abstraction and generalization, but the methods developed in this thesis can be extended to other models of predicate argument structure, like VerbNet and PropBank. This", "title": "" }, { "docid": "69624e1501b897bf1a9f9a5a84132da3", "text": "360° videos and Head-Mounted Displays (HMDs) are getting increasingly popular. However, streaming 360° videos to HMDs is challenging. This is because only video content in viewers' Field-of-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing the 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets. However, to our best knowledge, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra efforts to align the content and sensor data using the timestamps in the raw log files. The resulting datasets can be used by researchers, engineers, and hobbyists to either optimize existing 360° video streaming applications (like rate-distortion optimization) and novel applications (like crowd-driven camera movements). We believe that our dataset will stimulate more research activities along this exciting new research direction. ACM Reference format: Wen-Chih Lo, Ching-Ling Fan, Jean Lee, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu. 
2017. 360° Video Viewing Dataset in Head-Mounted Virtual Reality. In Proceedings of MMSys'17, Taipei, Taiwan, June 20-23, 2017, 6 pages. DOI: http://dx.doi.org/10.1145/3083187.3083219 CCS Concept • Information systems→Multimedia streaming", "title": "" }, { "docid": "74ce3b76d697d59df0c5d3f84719abb8", "text": "Existing Byzantine fault tolerance (BFT) protocols face significant challenges in the consortium blockchain scenario. On the one hand, we can make little assumptions about the reliability and security of the underlying Internet. On the other hand, the applications on consortium blockchains demand a system as scalable as the Bitcoin but providing much higher performance, as well as provable safety. We present a new BFT protocol, Gosig, that combines crypto-based secret leader selection and multi-round voting in the protocol layer with implementation layer optimizations such as gossip-based message propagation. In particular, Gosig guarantees safety even in a network fully controlled by adversaries, while providing provable liveness with easy-to-achieve network connectivity assumption. On a wide area testbed consisting of 140 Amazon EC2 servers spanning 14 cities on five continents, we show that Gosig can achieve over 4,000 transactions per second with less than 1 minute transaction confirmation time.", "title": "" }, { "docid": "e911045eb1c6469fdaa38102901f104f", "text": "Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. A network based on our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 — it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ∼10%. Code and models will be made publicly available.", "title": "" }, { "docid": "7b7f5a18bb7629c48c9fbe9475aa0f0c", "text": "These are the notes for my quarter-long course on basic stability theory at UCLA (MATH 285D, Winter 2015). The presentation highlights some relations to set theory and cardinal arithmetic reflecting my impression about the tastes of the audience. We develop the general theory of local stability instead of specializing to the finite rank case, and touch on some generalizations of stability such as NIP and simplicity. The material in this notes is based on [Pil02, Pil96], [vdD05], [TZ12], [Cas11a, Cas07], [Sim15], [Poi01] and [Che12]. I would also like to thank the following people for their comments and suggestions: Tyler Arant, Madeline Barnicle, Allen Gehret, Omer Ben Neria, Anton Bobkov, Jesse Han, Pietro Kreitlon Carolino, Andrew Marks, Alex Mennen, Assaf Shani, John Susice, Spencer Unger. 
Comments and corrections are very welcome (chernikov@math.ucla.edu, http://www.math.ucla.edu/~chernikov/).", "title": "" }, { "docid": "8b91d7299926329623e528b52880a17f", "text": "The main objective of this paper is to enhance the university's monitoring system taking into account factors such as reliability, time saving, and easy control. The proposed system consists of a mobile RFID solution in a logical context. The system prototype and its small scale application was a complete success. However, the more practical phase will not be immediately ready because a large setup is required and a part of the existing system has to be completely disabled. Some software modifications in the RFID system can be easily done in order for the system to be ready for a new application. In this paper, advantages and disadvantages of the proposed RFID system will be presented.", "title": "" }, { "docid": "d65aa05f6eb97907fe436ff50628a916", "text": "The process of stool transfer from healthy donors to the sick, known as faecal microbiota transplantation (FMT), has an ancient history. However, only recently researchers started investigating its applications in an evidence-based manner. Current knowledge of the microbiome, the concept of dysbiosis and results of preliminary research suggest that there is an association between gastrointestinal bacterial disruption and certain disorders. Researchers have studied the effects of FMT on various gastrointestinal and non-gastrointestinal diseases, but have been unable to precisely pinpoint specific bacterial strains responsible for the observed clinical improvement or futility of the process. The strongest available data support the efficacy of FMT in the treatment of recurrent Clostridium difficile infection with cure rates reported as high as 90% in clinical trials. The use of FMT in other conditions including inflammatory bowel disease, functional gastrointestinal disorders, obesity and metabolic syndrome is still controversial. Results from clinical studies are conflicting, which reflects the gap in our knowledge of the microbiome composition and function, and highlights the need for a more defined and personalised microbial isolation and transfer.", "title": "" }, { "docid": "5ea7ad08d686ab5fbfebc9717b39895d", "text": "Most deep reinforcement and imitation learning methods are data-driven and do not utilize the underlying problem structure. While these methods have achieved great success on many challenging tasks, several key problems such as generalization, data efficiency, compositionality etc. remain open. Utilizing problem structure in the form of architecture design, priors, structured losses, domain knowledge etc. may be a viable strategy to solve some of these problems. In this thesis, we present two approaches towards integrating problem structure with deep reinforcement and imitation learning methods. In the first part of the thesis, we consider reinforcement learning problems where parameters of the model vary with its phase while the agent attempts to learn through its interactions with the environment. We propose phase-parameterized policies and value function approximators which explicitly enforce a phase structure to the policy or value space to better model such environments. We apply our phase-parameterized reinforcement learning approach to both feed-forward and recurrent deep networks in the context of trajectory optimization and locomotion problems. 
Our experiments show that our proposed approach has superior modeling performance and leads to improved sample complexity when compared with traditional function approximators in cyclic and linear phase environments. In the second part of the thesis, we present a framework that incorporates structure in imitation learning by modelling the imitation of complex tasks or activities as a composition of easier subtasks. We propose a new algorithm based on the Generative Adversarial Imitation Learning (GAIL) framework which automatically learns sub-task policies from unsegmented demonstrations. Our approach leverages the idea of directed or causal information to segment demonstrations of complex tasks into simpler sub-tasks and learn sub-task policies that can then be composed together to perform complicated activities. We thus call our approach Directed-Information GAIL. We experiment with both discrete and continuous state-action environments and show that our proposed approach is able to find meaningful sub-tasks from unsegmented trajectories which are then be combined to perform more complicated tasks.", "title": "" }, { "docid": "0686319ad678ff3e645b423f090c74de", "text": "We consider the challenging problem of entity typing over an extremely fine grained set of types, wherein a single mention or entity can have many simultaneous and often hierarchically-structured types. Despite the importance of the problem, there is a relative lack of resources in the form of fine-grained, deep type hierarchies aligned to existing knowledge bases. In response, we introduce TypeNet, a dataset of entity types consisting of over 1941 types organized in a hierarchy, obtained by manually annotating a mapping from 1081 Freebase types to WordNet. We also experiment with several models comparable to state-of-the-art systems and explore techniques to incorporate a structure loss on the hierarchy with the standard mention typing loss, as a first step towards future research on this dataset.", "title": "" }, { "docid": "da87c8385ac485fe5d2903e27803c801", "text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the polygon mesh processing. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. You will not run out of this book.", "title": "" }, { "docid": "357b798f0429a29bb3210cfc3f031c3a", "text": "The Facial Action Coding System (FACS) is a widely used protocol for recognizing and labelling facial expression by describing the movement of muscles of the face. FACS is used to objectively measure the frequency and intensity of facial expressions without assigning any emotional meaning to those muscle movements. Instead FACS breaks down facial expressions into their smallest discriminable movements called Action Units. Each Action Unit creates a distinct change in facial appearance, such as an eyebrow lift or nose wrinkle. FACS coders can identify the Action Units which are present on the face when viewing still images or videos. Psychological research has used FACS to examine a variety of research questions including social-emotional development, neuropsychiatric disorders, and deception. 
In the course of this report we provide an overview of FACS and the Action Units, its reliability as a measure, and how it has been applied in some key areas of psychological research.", "title": "" }, { "docid": "115b89c782465a740e5e7aa2cae52669", "text": "Japan discards approximately 18 million tonnes of food annually, an amount that accounts for 40% of national food production. In recent years, a number of measures have been adopted at the institutional level to tackle this issue, showing increasing commitment of the government and other organizations. Along with the aim of environmental sustainability, food waste recycling, food loss prevention and consumer awareness raising in Japan are clearly pursuing another common objective. Although food loss and waste problems have been publicly acknowledged only very recently, strong implications arise from the economic and cultural history of the Japanese food system. Specific national concerns over food security have accompanied the formulation of current national strategies whose underlying causes and objectives add a unique facet to Japan’s efforts with respect to those of other developed countries’. Fighting Food Loss and Food Waste in Japan", "title": "" } ]
scidocsrr
3c813c21dbb065c9da5562d21be5b73b
Toxic Behaviors in Esports Games: Player Perceptions and Coping Strategies
[ { "docid": "ac46286c7d635ccdcd41358666026c12", "text": "This paper represents our first endeavor to explore how to better understand the complex nature, scope, and practices of eSports. Our goal is to explore diverse perspectives on what defines eSports as a starting point for further research. Specifically, we critically reviewed existing definitions/understandings of eSports in different disciplines. We then interviewed 26 eSports players and qualitatively analyzed their own perceptions of eSports. We contribute to further exploring definitions and theories of eSports for CHI researchers who have considered online gaming a serious and important area of research, and highlight opportunities for new avenues of inquiry for researchers who are interested in designing technologies for this unique genre.", "title": "" }, { "docid": "3d7fabdd5f56c683de20640abccafc44", "text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act on one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.", "title": "" } ]
[ { "docid": "244745da710e8c401173fe39359c7c49", "text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.", "title": "" }, { "docid": "9f5b61ad41dceff67ab328791ed64630", "text": "In this paper we present a resource-adaptive framework for real-time vision-aided inertial navigation. Specifically, we focus on the problem of visual-inertial odometry (VIO), in which the objective is to track the motion of a mobile platform in an unknown environment. Our primary interest is navigation using miniature devices with limited computational resources, similar for example to a mobile phone. Our proposed estimation framework consists of two main components: (i) a hybrid EKF estimator that integrates two algorithms with complementary computational characteristics, namely a sliding-window EKF and EKF-based SLAM, and (ii) an adaptive image-processing module that adjusts the number of detected image features based oadaptive image-processing module that adjusts the number of detected image features based on the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. We present experimental results showing that the proposed estimation framework isn the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. 
We present experimental results showing that the proposed estimation framework is capable of real-time processing of image and inertial data on the processor of a mobile phone.", "title": "" }, { "docid": "6779d20fd95ff4525404bdd4d3c7df4b", "text": "A new method is presented for adaptive document image binarization, where the page is considered as a collection of subcomponents such as text, background and picture. The problems caused by noise, illumination and many source type-related degradations are addressed. Two new algorithms are applied to determine a local threshold for each pixel. The performance evaluation of the algorithm utilizes test images with ground-truth, evaluation metrics for binarization of textual and synthetic images, and a weight-based ranking procedure for the \"nal result presentation. The proposed algorithms were tested with images including di!erent types of document components and degradations. The results were compared with a number of known techniques in the literature. The benchmarking results show that the method adapts and performs well in each case qualitatively and quantitatively. ( 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "1dc7b9dc4f135625e2680dcde8c9e506", "text": "This paper empirically analyzes di erent e ects of advertising in a nondurable, experience good market. A dynamic learning model of consumer behavior is presented in which we allow both \\informative\" e ects of advertising and \\prestige\" or \\image\" e ects of advertising. This learning model is estimated using consumer level panel data tracking grocery purchases and advertising exposures over time. Empirical results suggest that in this data, advertising's primary e ect was that of informing consumers. The estimates are used to quantify the value of this information to consumers and evaluate welfare implications of an alternative advertising regulatory regime. JEL Classi cations: D12, M37, D83 ' Economics Dept., Boston University, Boston, MA 02115 (ackerber@bu.edu). This paper is a revised version of the second and third chapters of my doctoral dissertation at Yale University. Many thanks to my advisors: Steve Berry and Ariel Pakes, as well as Lanier Benkard, Russell Cooper, Gautam Gowrisankaran, Sam Kortum, Mike Riordan, John Rust, Roni Shachar, and many seminar participants, including most recently those at the NBER 1997Winter IO meetings, for advice and comments. I thank the Yale School of Management for gratefully providing the data used in this study. Financial support from the Cowles Foundation in the form of the Arvid Anderson Dissertation Fellowship is acknowledged and appreciated. All remaining errors in this paper are my own.", "title": "" }, { "docid": "f26680bb9306ca413d0fd36efa406107", "text": "Frequency-domain concepts and terminology are commonly used to describe antennas. These are very satisfactory for a CW or narrowband application. However, their validity is questionable for an instantaneous wideband excitation. Time-domain and/or wideband analyses can provide more insight and more effective terminology. Two approaches for this time-domain analysis have been described. The more complete one uses the transfer function, a function which describes the amplitude and phase of the response over the entire frequency spectrum. 
While this is useful for evaluating the overall response of a system, it may not be practical when trying to characterize an antenna's performance, and trying to compare it with that of other antennas. A more convenient and descriptive approach uses time-domain parameters, such as efficiency, energy pattern, receiving area, etc., with the constraint that the reference or excitation signal is known. The utility of both approaches, for describing the time-domain performance, was demonstrated for antennas which are both small and large, in comparison to the length of the reference signal. The approaches have also been used for other antennas, such as arrays, where they also could be applied to measure the effects of mutual impedance, for a wide-bandwidth signal. The time-domain ground-plane antenna range, on which these measurements were made, is suitable for symmetric antennas. However, the approach can be readily adapted to asymmetric antennas, without a ground plane, by using suitable reference antennas.<<ETX>>", "title": "" }, { "docid": "c8b57dc6e3ef7c6b8712733ec6177275", "text": "A student information system provides a simple interface for the easy collation and maintenance of all manner of student information. The creation and management of accurate, up-to-date information regarding students' academic careers is critical students and for the faculties and administration ofSebha University in Libya and for any other educational institution. A student information system deals with all kinds of data from enrollment to graduation, including program of study, attendance record, payment of fees and examination results to name but a few. All these dataneed to be made available through a secure, online interface embedded in auniversity's website. To lay the groundwork for such a system, first we need to build the student database to be integrated with the system. Therefore we proposed and implementedan online web-based system, which we named the student data system (SDS),to collect and correct all student data at Sebha University. The output of the system was evaluated by using a similarity (Euclidean distance) algorithm. The results showed that the new data collected by theSDS can fill the gaps and correct the errors in the old manual data records.", "title": "" }, { "docid": "7b7e41ced300aeff7916509c04c4fd6a", "text": "We present and evaluate various content-based recommendation models that make use of user and item profiles defined in terms of weighted lists of social tags. The studied approaches are adaptations of the Vector Space and Okapi BM25 information retrieval models. We empirically compare the recommenders using two datasets obtained from Delicious and Last.fm social systems, in order to analyse the performance of the approaches in scenarios with different domains and tagging behaviours.", "title": "" }, { "docid": "3763da6b72ee0a010f3803a901c9eeb2", "text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. 
We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.", "title": "" }, { "docid": "aa73df5eadafff7533994c05a8d3c415", "text": "In this paper, we report on the outcomes of the European project EduWear. The aim of the project was to develop a construction kit with smart textiles and to examine its impact on young people. The construction kit, including a suitable programming environment and a workshop concept, was adopted by children in a number of workshops.\n The evaluation of the workshops showed that designing, creating, and programming wearables with a smart textile construction kit allows for creating personal meaningful projects which relate strongly to aspects of young people's life worlds. Through their construction activities, participants became more self-confident in dealing with technology and were able to draw relations between their own creations and technologies present in their environment. We argue that incorporating such constructionist processes into an appropriate workshop concept is essential for triggering thought processes about the character of digital media beyond the construction process itself.", "title": "" }, { "docid": "f119b0ee9a237ab1e9acdae19664df0f", "text": "Recent editorials in this journal have defended the right of eminent biologist James Watson to raise the unpopular hypothesis that people of sub-Saharan African descent score lower, on average, than people of European or East Asian descent on tests of general intelligence. As those editorials imply, the scientific evidence is substantial in showing a genetic contribution to these differences. The unjustified ill treatment meted out to Watson therefore requires setting the record straight about the current state of the evidence on intelligence, race, and genetics. In this paper, we summarize our own previous reviews based on 10 categories of evidence: The worldwide distribution of test scores; the g factor of mental ability; heritability differences; brain size differences; trans-racial adoption studies; racial admixture studies; regression-to-the-mean effects; related life-history traits; human origins research; and the poverty of predictions from culture-only explanations. The preponderance of evidence demonstrates that in intelligence, brain size, and other life-history variables, East Asians average a higher IQ and larger brain than Europeans who average a higher IQ and larger brain than Africans. Further, these group differences are 50–80% heritable. These are facts, not opinions and science must be governed by data. There is no place for the ‘‘moralistic fallacy’’ that reality must conform to our social, political, or ethical desires. !c 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7bd3f6b7b2f79f08534b70c16be91c02", "text": "This paper describes a dual-loop delay-locked loop (DLL) which overcomes the problem of a limited delay range by using multiple voltage-controlled delay lines (VCDLs). A reference loop generates quadrature clocks, which are then delayed with controllable amounts by four VCDLs and multiplexed to generate the output clock in a main loop. 
This architecture enables the DLL to emulate the infinite-length VCDL with multiple finite-length VCDLs. The DLL incorporates a replica biasing circuit for low-jitter characteristics and a duty cycle corrector immune to prevalent process mismatches. A test chip has been fabricated using a 0.25m CMOS process. At 400 MHz, the peak-to-peak jitter with a quiet 2.5-V supply is 54 ps, and the supply-noise sensitivity is 0.32 ps/mV.", "title": "" }, { "docid": "b0727e320a1c532bd3ede4fd892d8d01", "text": "Semantic technologies could facilitate realizing features like interoperability and reasoning for Internet of Things (IoT). However, the dynamic and heterogeneous nature of IoT data, constrained resources, and real-time requirements set challenges for applying these technologies. In this paper, we study approaches for delivering semantic data from IoT nodes to distributed reasoning engines and reasoning over such data. We perform experiments to evaluate the scalability of these approaches and also study how reasoning is affected by different data aggregation strategies.", "title": "" }, { "docid": "5a61c356940eef5eb18c53a71befbe5b", "text": "Recently, plant construction throughout the world, including nuclear power plant construction, has grown significantly. The scale of Korea’s nuclear power plant construction in particular, has increased gradually since it won a contract for a nuclear power plant construction project in the United Arab Emirates in 2009. However, time and monetary resources have been lost in some nuclear power plant construction sites due to lack of risk management ability. The need to prevent losses at nuclear power plant construction sites has become more urgent because it demands professional skills and large-scale resources. Therefore, in this study, the Analytic Hierarchy Process (AHP) and Fuzzy Analytic Hierarchy Process (FAHP) were applied in order to make comparisons between decision-making methods, to assess the potential risks at nuclear power plant construction sites. To suggest the appropriate choice between two decision-making methods, a survey was carried out. From the results, the importance and the priority of 24 risk factors, classified by process, cost, safety, and quality, were analyzed. The FAHP was identified as a suitable method for risk assessment of nuclear power plant construction, compared with risk assessment using the AHP. These risk factors will be able to serve as baseline data for risk management in nuclear power plant construction projects.", "title": "" }, { "docid": "d5ddc141311afb6050a58be88303b577", "text": "Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. 
We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster RCNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.", "title": "" }, { "docid": "609cc8dd7323e817ddfc5314070a68bf", "text": "We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras.", "title": "" }, { "docid": "7eca894697ee372abe6f67a069dcd910", "text": "Government agencies and consulting companies in charge of pavement management face the challenge of maintaining pavements in serviceable conditions throughout their life from the functional and structural standpoints. For this, the assessment and prediction of the pavement conditions are crucial. This study proposes a neuro-fuzzy model to predict the performance of flexible pavements using the parameters routinely collected by agencies to characterize the condition of an existing pavement. These parameters are generally obtained by performing falling weight deflectometer tests and monitoring the development of distresses on the pavement surface. The proposed hybrid model for predicting pavement performance was characterized by multilayer, feedforward neural networks that led the reasoning process of the IF-THEN fuzzy rules. The results of the neuro-fuzzy model were superior to those of the linear regression model in terms of accuracy in the approximation. The proposed neuro-fuzzy model showed good generalization capability, and the evaluation of the model performance produced satisfactory results, demonstrating the efficiency and potential of these new mathematical modeling techniques.", "title": "" }, { "docid": "60bdd255a19784ed2d19550222e61b69", "text": "Haptic feedback on touch-sensitive displays provides significant benefits in terms of reducing error rates, increasing interaction speed and minimizing visual distraction. This particularly holds true for multitasking situations such as the interaction with mobile devices or touch-based in-vehicle systems. In this paper, we explore how the interaction with tactile touchscreens can be modeled and enriched using a 2+1 state transition model. The model expands an approach presented by Buxton. 
We present HapTouch -- a force-sensitive touchscreen device with haptic feedback that allows the user to explore and manipulate interactive elements using the sense of touch. We describe the results of a preliminary quantitative study to investigate the effects of tactile feedback on the driver's visual attention, driving performance and operating error rate. In particular, we focus on how active tactile feedback allows the accurate interaction with small on-screen elements during driving. Our results show significantly reduced error rates and input time when haptic feedback is given.", "title": "" }, { "docid": "255ff39001f9bbcd7b1e6fe96f588371", "text": "We derive inner and outer bounds on the capacity region for a class of three-user partially connected interference channels. We focus on the impact of topology, interference alignment, and interplay between interference and noise. The representative channels we consider are the ones that have clear interference alignment gain. For these channels, Z-channel type outer bounds are tight to within a constant gap from capacity. We present near-optimal achievable schemes based on rate-splitting, lattice alignment, and successive decoding.", "title": "" }, { "docid": "85b77b88c2a06603267b770dbad8ec73", "text": "Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.", "title": "" }, { "docid": "a9b366b2b127b093b547f8a10ac05ca5", "text": "Each user session in an e-commerce system can be modeled as a sequence of web pages, indicating how the user interacts with the system and makes his/her purchase. A typical recommendation approach, e.g., Collaborative Filtering, generates its results at the beginning of each session, listing the most likely purchased items. However, such approach fails to exploit current viewing history of the user and hence, is unable to provide a real-time customized recommendation service. In this paper, we build a deep recurrent neural network to address the problem. The network tracks how users browse the website using multiple hidden layers. Each hidden layer models how the combinations of webpages are accessed and in what order. To reduce the processing cost, the network only records a finite number of states, while the old states collapse into a single history state. Our model refreshes the recommendation result each time when user opens a new web page. As user's session continues, the recommendation result is gradually refined. 
Furthermore, we integrate the recurrent neural network with a Feedforward network which represents the user-item correlations to increase the prediction accuracy. Our approach has been applied to Kaola (http://www.kaola.com), an e-commerce website powered by the NetEase technologies. It shows a significant improvement over the previous recommendation service.", "title": "" } ]
scidocsrr
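Each record in this dump follows the same shape as the block above: a query identifier, a query string (a paper title), a list of positive passage objects, a list of negative passage objects, and the subset tag scidocsrr. As a rough illustration only, the sketch below shows one way such records could be read back into structured form; the field names (query_id, query, positive_passages, negative_passages, subset) and the JSON Lines layout are assumptions inferred from the visible values, not a documented schema for this file.

```python
# Hypothetical sketch: reading records shaped like the ones above
# (query id, query text, positive passages, negative passages, subset tag).
# The field names and the JSON Lines layout are assumptions, not a
# documented schema for this dump.
import json
from dataclasses import dataclass
from typing import List


@dataclass
class Passage:
    docid: str
    text: str
    title: str


@dataclass
class RetrievalRecord:
    query_id: str
    query: str
    positive_passages: List[Passage]
    negative_passages: List[Passage]
    subset: str


def load_records(path: str) -> List[RetrievalRecord]:
    """Parse one JSON object per line into RetrievalRecord instances."""
    records = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            raw = json.loads(line)
            records.append(
                RetrievalRecord(
                    query_id=raw["query_id"],
                    query=raw["query"],
                    positive_passages=[Passage(**p) for p in raw["positive_passages"]],
                    negative_passages=[Passage(**p) for p in raw["negative_passages"]],
                    subset=raw.get("subset", ""),
                )
            )
    return records


if __name__ == "__main__":
    # "scidocsrr.jsonl" is an assumed file name for this dump.
    for rec in load_records("scidocsrr.jsonl"):
        print(rec.query_id, rec.query,
              len(rec.positive_passages), len(rec.negative_passages))
```

Keeping one dataclass per passage preserves the docid/text/title triple exactly as each passage object appears above, so nothing from the dump is lost in the round trip.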
0fc77774d6717d8bfe84af7a5549a2b9
MLPnP - A Real-Time Maximum Likelihood Solution to the Perspective-n-Point Problem
[ { "docid": "4538e6bde228b0423f1ec2a41feae17c", "text": "In this paper, we revisit the classical perspective-n-point (PnP) problem, and propose the first non-iterative O(n) solution that is fast, generally applicable and globally optimal. Our basic idea is to formulate the PnP problem into a functional minimization problem and retrieve all its stationary points by using the Gr\"obner basis technique. The novelty lies in a non-unit quaternion representation to parameterize the rotation and a simple but elegant formulation of the PnP problem into an unconstrained optimization problem. Interestingly, the polynomial system arising from its first-order optimality condition assumes two-fold symmetry, a nice property that can be utilized to improve speed and numerical stability of a Grobner basis solver. Experiment results have demonstrated that, in terms of accuracy, our proposed solution is definitely better than the state-of-the-art O(n) methods, and even comparable with the reprojection error minimization method.", "title": "" } ]
[ { "docid": "e64608f39ab082982178ad2b3539890f", "text": "Hoeschele, Michael David. M.S., Purdue University, May, 2006, Detecting Social Engineering. Major Professor: Marcus K. Rogers. This study consisted of creating and evaluating a proof of concept model of the Social Engineering Defense Architecture (SEDA) as theoretically proposed by Hoeschele and Rogers (2005). The SEDA is a potential solution to the problem of Social Engineering (SE) attacks perpetrated over the phone lines. The proof of concept model implemented some simple attack detection processes and the database to store all gathered information. The model was tested by generating benign telephone conversations in addition to conversations that include Social Engineering (SE) attacks. The conversations were then processed by the model to determine its accuracy to detect attacks. The model was able to detect all attacks and to store all of the correct data in the database, resulting in 100% accuracy.", "title": "" }, { "docid": "0f89f98d8db9667e24f23466c2e37d8a", "text": "With the increase in the elderly, stroke has become a common disease, often leading to motor dysfunction and even permanent disability. Lower-limb rehabilitation robots can help patients to carry out reasonable and effective training to improve the motor function of paralyzed extremity. In this paper, the developments of lower-limb rehabilitation robots in the past decades are reviewed. Specifically, we provide a classification, a comparison, and a design overview of the driving modes, training paradigm, and control strategy of the lower-limb rehabilitation robots in the reviewed literature. A brief review on the gait detection technology of lower-limb rehabilitation robots is also presented. Finally, we discuss the future directions of the lower-limb rehabilitation robots.", "title": "" }, { "docid": "77320edf2d8da853b873c71e26802c6e", "text": "Content Delivery Network (CDN) services largely affect the delivery quality perceived by users. While those services were initially offered by independent entities, some large ISP now develop their own CDN activities to control costs and delivery quality. But this new activity is also a new source of revenues for those vertically integrated ISP-CDNs, which can sell those services to content providers. In this paper, we investigate the impact of having an ISP and a vertically-integrated CDN, on the main actors of the ecosystem (users, competing ISPs). Our approach is based on an economic model of revenues and costs, and a multilevel game-theoretic formulation of the interactions among actors. Our model incorporates the possibility for the vertically-integrated ISP to partially offer CDN services to competitors in order to optimize the trade-off between CDN revenue (if fully offered) and competitive advantage on subscriptions at the ISP level (if not offered to competitors). Our results highlight two counterintuitive phenomena: an ISP may prefer an independent CDN over controlling (integrating) a CDN, and from the user point of view vertical integration is preferable to an independent CDN or a no-CDN configuration. Hence, a regulator may want to elicit such CDN-ISP vertical integrations rather than prevent them.", "title": "" }, { "docid": "8d98529cd3fc92eba091e09ea223df4e", "text": "Exploring small connected and induced subgraph patterns (CIS patterns, or graphlets) has recently attracted considerable attention. 
Despite recent efforts on computing the number of instances a specific graphlet appears in a large graph (i.e., the total number of CISes isomorphic to the graphlet), little attention has been paid to characterizing a node’s graphlet degree, i.e., the number of CISes isomorphic to the graphlet that include the node, which is an important metric for analyzing complex networks such as social and biological networks. Similar to global graphlet counting, it is challenging to compute node graphlet degrees for a large graph due to the combinatorial nature of the problem. Unfortunately, previous methods of computing global graphlet counts are not suited to solve this problem. In this paper we propose sampling methods to estimate node graphlet degrees for undirected and directed graphs, and analyze the error of our estimates. To the best of our knowledge, we are the first to study this problem and give a fast scalable solution. We conduct experiments on a variety of real-word datasets that demonstrate that our methods accurately and efficiently estimate node graphlet degrees for graphs with millions of edges.", "title": "" }, { "docid": "8b3042021e48c86873e00d646f65b052", "text": "We derive a numerical method for Darcy flow, hence also for Poisson’s equation in first order form, based on discrete exterior calculus (DEC). Exterior calculus is a generalization of vector calculus to smooth manifolds and DEC is its discretization on simplicial complexes such as triangle and tetrahedral meshes. We start by rewriting the governing equations of Darcy flow using the language of exterior calculus. This yields a formulation in terms of flux differential form and pressure. The numerical method is then derived by using the framework provided by DEC for discretizing differential forms and operators that act on forms. We also develop a discretization for spatially dependent Hodge star that varies with the permeability of the medium. This also allows us to address discontinuous permeability. The matrix representation for our discrete non-homogeneous Hodge star is diagonal, with positive diagonal entries. The resulting linear system of equations for flux and pressure are saddle type, with a diagonal matrix as the top left block. Our method requires the use of meshes in which each simplex contains its circumcenter. The performance of the proposed numerical method is illustrated on many standard test problems. These include patch tests in two and three dimensions, comparison with analytically known solution in two dimensions, layered medium with alternating permeability values, and a test with a change in permeability along the flow direction. A short introduction to the relevant parts of smooth and discrete exterior calculus is included in this paper. We also include a discussion of the boundary condition in terms of exterior calculus.", "title": "" }, { "docid": "152fc0018ecb2d6d2b69e2a2e2eb6ef9", "text": "This paper examines the relationship between low interests maintained by advanced economy central banks and credit booms in emerging economies. In a model with crossborder banking, low funding rates increase credit supply, but the initial shock is amplified through the “risk-taking channel” of monetary policy where greater risk-taking interact with dampened measured risks that are driven by currency appreciation to create a feedback loop. 
In an empirical investigation using VAR analysis, we find that expectations of lower short-term rates dampens measured risks and stimulate cross-border banking sector capital flows. JEL Codes: F32, F33, F34", "title": "" }, { "docid": "09062173db6b5f5190ab7c8f7f6ce6fd", "text": "This paper presents component techniques essential for converting executables to a high-level intermediate representation (IR) of an existing compiler. The compiler IR is then employed for three distinct applications: binary rewriting using the compiler's binary back-end, vulnerability detection using source-level symbolic execution, and source-code recovery using the compiler's C backend. Our techniques enable complex high-level transformations not possible in existing binary systems, address a major challenge of input-derived memory addresses in symbolic execution and are the first to enable recovery of a fully functional source-code.\n We present techniques to segment the flat address space in an executable containing undifferentiated blocks of memory. We demonstrate the inadequacy of existing variable identification methods for their promotion to symbols and present our methods for symbol promotion. We also present methods to convert the physically addressed stack in an executable (with a stack pointer) to an abstract stack (without a stack pointer). Our methods do not use symbolic, relocation, or debug information since these are usually absent in deployed executables.\n We have integrated our techniques with a prototype x86 binary framework called SecondWrite that uses LLVM as IR. The robustness of the framework is demonstrated by handling executables totaling more than a million lines of source-code, produced by two different compilers (gcc and Microsoft Visual Studio compiler), three languages (C, C++, and Fortran), two operating systems (Windows and Linux) and a real world program (Apache server).", "title": "" }, { "docid": "5495aeaa072a1f8f696298ebc7432045", "text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. 
Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitudes in efficiency improvements over software, without having to lock into a fixed ASIC solution.", "title": "" }, { "docid": "de9e0e080ec3210d771bfffb426e0245", "text": "PURPOSE\nTo compare adults who stutter with and without support group experience on measures of self-esteem, self-efficacy, life satisfaction, self-stigma, perceived stuttering severity, perceived origin and future course of stuttering, and importance of fluency.\n\n\nMETHOD\nParticipants were 279 adults who stutter recruited from the National Stuttering Association and Board Recognized Specialists in Fluency Disorders. Participants completed a Web-based survey comprised of various measures of well-being including the Rosenberg Self-Esteem Scale, Generalized Self-Efficacy Scale, Satisfaction with Life Scale, a measure of perceived stuttering severity, the Self-Stigma of Stuttering Scale, and other stuttering-related questions.\n\n\nRESULTS\nParticipants with support group experience as a whole demonstrated lower internalized stigma, were more likely to believe that they would stutter for the rest of their lives, and less likely to perceive production of fluent speech as being highly or moderately important when talking to other people, compared to participants with no support group experience. Individuals who joined support groups to help others feel better about themselves reported higher self-esteem, self-efficacy, and life satisfaction, and lower internalized stigma and perceived stuttering severity, compared to participants with no support group experience. Participants who stutter as an overall group demonstrated similar levels of self-esteem, higher self-efficacy, and lower life satisfaction compared to averages from normative data for adults who do not stutter.\n\n\nCONCLUSIONS\nFindings support the notion that self-help support groups limit internalization of negative attitudes about the self, and that focusing on helping others feel better in a support group context is linked to higher levels of psychological well-being.\n\n\nEDUCATIONAL OBJECTIVES\nAt the end of this activity the reader will be able to: (a) describe the potential psychological benefits of stuttering self-help support groups for people who stutter, (b) contrast between important aspects of well-being including self-esteem self-efficacy, and life satisfaction, (c) summarize differences in self-esteem, self-efficacy, life satisfaction, self-stigma, perceived stuttering severity, and perceptions of stuttering between adults who stutter with and without support group experience, (d) summarize differences in self-esteem, self-efficacy, and life satisfaction between adults who stutter and normative data for adults who do not stutter.", "title": "" }, { "docid": "da629f12846e3b2398624ec6a44d24de", "text": "We propose a discriminatively trained recurrent neural network (RNN) that predicts the actions for a fast and accurate shift-reduce dependency parser. The RNN uses its output-dependent model structure to compute hidden vectors that encode the preceding partial parse, and uses them to estimate probabilities of parser actions. Unlike a similar previous generative model (Henderson and Titov, 2010), the RNN is trained discriminatively to optimize a fast beam search. This beam search prunes after each shift action, so we add a correctness probability to each shift action and train this score to discriminate between correct and incorrect sequences of parser actions. 
We also speed up parsing time by caching computations for frequent feature combinations, including during training, giving us both faster training and a form of backoff smoothing. The resulting parser is over 35 times faster than its generative counterpart with nearly the same accuracy, producing state-of-art dependency parsing results while requiring minimal feature engineering. YAZDANI, Majid, HENDERSON, James. Incremental Recurrent Neural Network Dependency Parser with Search-based Discriminative Training. In: Proceedings of the 19th Conference on Computational Language Learning. 2015. p. 142-152", "title": "" }, { "docid": "b31aaa6805524495f57a2f54d0dd86f1", "text": "CLINICAL HISTORY A 54-year-old white female was seen with a 10-year history of episodes of a burning sensation of the left ear. The episodes are preceded by nausea and a hot feeling for about 15 seconds and then the left ear becomes visibly red for an average of about 1 hour, with a range from about 30 minutes to 2 hours. About once every 2 years, she would have a flurry of episodes occurring over about a 1-month period during which she would average about five episodes with a range of 1 to 6. There was also an 18-year history of migraine without aura occurring about once a year. At the age of 36 years, she developed left-sided pulsatile tinnitus. A cerebral arteriogram revealed a proximal left internal carotid artery occlusion of uncertain etiology after extensive testing. An MRI scan at the age of 45 years was normal. Neurological examination was normal. A carotid ultrasound study demonstrated complete occlusion of the left internal carotid artery and a normal right. Question.—What is the diagnosis?", "title": "" }, { "docid": "c63ce594f3e940783ae24494a6cb1aa9", "text": "In this paper, a new deep reinforcement learning based augmented general sequence tagging system is proposed. The new system contains two parts: a deep neural network (DNN) based sequence tagging model and a deep reinforcement learning (DRL) based augmented tagger. The augmented tagger helps improve system performance by modeling the data with minority tags. The new system is evaluated on SLU and NLU sequence tagging tasks using ATIS and CoNLL2003 benchmark datasets, to demonstrate the new system’s outstanding performance on general tagging tasks. Evaluated by F1 scores, it shows that the new system outperforms the current state-of-the-art model on ATIS dataset by 1.9 % and that on CoNLL-2003 dataset by 1.4 %.", "title": "" }, { "docid": "bffcc580fa868d4c0b05742997caa55a", "text": "In this paper, we propose a probabilistic model for detecting relevant changes in registered aerial image pairs taken with the time differences of several years and in different seasonal conditions. The introduced approach, called the conditional mixed Markov model, is a combination of a mixed Markov model and a conditionally independent random field of signals. The model integrates global intensity statistics with local correlation and contrast features. A global energy optimization process ensures simultaneously optimal local feature selection and smooth observation-consistent segmentation. 
Validation is given on real aerial image sets provided by the Hungarian Institute of Geodesy, Cartography and Remote Sensing and Google Earth.", "title": "" }, { "docid": "610476babafbf2785ace600ed409638c", "text": "In the utility grid interconnection of photovoltaic (PV) energy sources, inverters determine the overall system performance, which result in the demand to route the grid connected transformerless PV inverters (GCTIs) for residential and commercial applications, especially due to their high efficiency, light weight, and low cost benefits. In spite of these benefits of GCTIs, leakage currents due to distributed PV module parasitic capacitances are a major issue in the interconnection, as they are undesired because of safety, reliability, protective coordination, electromagnetic compatibility, and PV module lifetime issues. This paper classifies the kW and above range power rating GCTI topologies based on their leakage current attributes and investigates and/illustrates their leakage current characteristics by making use of detailed microscopic waveforms of a representative topology of each class. The cause and quantity of leakage current for each class are identified, not only providing a good understanding, but also aiding the performance comparison and inverter design. With the leakage current characteristic investigation, the study places most topologies under small number of classes with similar leakage current attributes facilitating understanding, evaluating, and the design of GCTIs. Establishing a clear relation between the topology type and leakage current characteristic, the topology families are extended with new members, providing the design engineers a variety of GCTI topology configurations with different characteristics.", "title": "" }, { "docid": "6a282fbc6ee9baea673c2f9f15955a18", "text": "A 34-year-old woman suffered from significant chronic pain, depression, non-restorative sleep, chronic fatigue, severe morning stiffness, leg cramps, irritable bowel syndrome, hypersensitivity to cold, concentration difficulties, and forgetfulness. Blood tests were negative for rheumatic disorders. The patient was diagnosed with Fibromyalgia syndrome (FMS). Due to the lack of effectiveness of pharmacological therapies in FMS, she approached a novel metabolic proposal for the symptomatic remission. Its core idea is supporting serotonin synthesis by allowing a proper absorption of tryptophan assumed with food, while avoiding, or at least minimizing the presence of interfering non-absorbed molecules, such as fructose and sorbitol. Such a strategy resulted in a rapid improvement of symptoms after only few days on diet, up to the remission of most symptoms in 2 months. Depression, widespread chronic pain, chronic fatigue, non-restorative sleep, morning stiffness, and the majority of the comorbidities remitted. Energy and vitality were recovered by the patient as prior to the onset of the disease, reverting the occupational and social disabilities. The patient episodically challenged herself breaking the dietary protocol leading to its negative test and to the evaluation of its benefit. These breaks correlated with the recurrence of the symptoms, supporting the correctness of the biochemical hypothesis underlying the diet design toward remission of symptoms, but not as a final cure. 
We propose this as a low risk and accessible therapeutic protocol for the symptomatic remission in FMS with virtually no costs other than those related to vitamin and mineral salt supplements in case of deficiencies. A pilot study is required to further ground this metabolic approach, and to finally evaluate its inclusion in the guidelines for clinical management of FMS.", "title": "" }, { "docid": "e33dd9c497488747f93cfcc1aa6fee36", "text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.", "title": "" }, { "docid": "fcc76a05f8f1dd12ade56f486bca814f", "text": "In human-computer interaction, gaze orientation is an important and promising source of information to demonstrate the attention and focus of users. Gaze detection can also be an extremely useful metric for analysing human mood and affect. Furthermore, gaze can be used as an input method for human-computer interaction. However, currently real-time and accurate gaze estimation is still an open problem. In this paper, we propose a simple and novel estimation model of the real-time gaze direction of a user on a computer screen. This method utilises cheap capturing devices, a HD webcam and a Microsoft Kinect. We consider that the gaze motion from a user facing forwards is composed of the local gaze motion shifted by eye motion and the global gaze motion driven by face motion. We validate our proposed model of gaze estimation and provide experimental evaluation of the reliability and the precision of the method.", "title": "" }, { "docid": "85462fe3cf060d7fa85251d5a7d30d1a", "text": "Validity of PostureScreen Mobile® in the Measurement of Standing Posture Breanna Cristine Berry Hopkins Department of Exercise Sciences, BYU Master of Science Background: PostureScreen Mobile® is an app created to quickly screen posture using front and side-view photographs. There is currently a lack of evidence that establishes PostureScreen Mobile® (PSM) as a valid measure of posture. Therefore, the purpose of this preliminary study was to document the validity and reliability of PostureScreen Mobile® in assessing static standing posture. Methods: This study was an experimental trial in which the posture of 50 male participants was assessed a total of six times using two different methods: PostureScreen Mobile® and Vicon 3D motion analysis system (VIC). Postural deviations, as measured during six trials of PSM assessments (3 trials with and 3 trials without anatomical markers), were compared to the postural deviations as measured using the VIC as the criterion measure. Measurement of lateral displacement on the x-axis (shift) and rotation on the y-axis (tilt) were made of the head, shoulders, and hips in the frontal plane. 
Measurement of forward/rearward displacement on the Z-axis (shift) of the head, shoulders, hips, and knees were made in the sagittal plane. Validity was evaluated by comparing the PSM measurements of shift and tilt of each body part to that of the VIC. Reliability was evaluated by comparing the variance of PSM measurements to the variance of VIC measurements. The statistical model employed the Bayesian framework and consisted of the scaled product of the likelihood of the data given the parameters and prior probability densities for each of the parameters. Results: PSM tended to overestimate VIC postural tilt and shift measurements in the frontal plane and underestimate VIC postural shift measurements in the sagittal plane. Use of anatomical markers did not universally improve postural measurements with PSM, and in most cases, the variance of postural measurements using PSM exceeded that of VIC. The patterns in the intraclass correlation coefficients (ICC) suggest high trial-to-trial variation in posture. Conclusions: We conclude that until research further establishes the validity and reliability of the PSM app, it should not be used in research or clinical applications when accurate postural assessments are necessary or when serial measurements of posture will be performed. We suggest that the PSM be used by health and fitness professionals as a screening tool, as described by the manufacturer. Due to the suspected trial-to-trial variation in posture, we question the usefulness of a single postural assessment.", "title": "" }, { "docid": "f0af0497727f2256aa52b30c3a7f64d1", "text": "This paper presented a modified particle swarm optimizer algorithm (MPSO). The aggregation degree of the particle swarm was introduced. The particles' diversity was improved through periodically monitoring aggregation degree of the particle swarm. On the later development of the PSO algorithm, it has been taken strategy of the Gaussian mutation to the best particle's position, which enhanced the particles' capacity to jump out of local minima. Several typical benchmark functions with different dimensions have been used for testing. The simulation results show that the proposed method improves the convergence precision and speed of PSO algorithm effectively.", "title": "" }, { "docid": "4528c64444ce7350537b34823f91744b", "text": "The anterior prefrontal cortex (APC) confers on humans the ability to simultaneously pursue several goals. How does the brain's motivational system, including the medial frontal cortex (MFC), drive the pursuit of concurrent goals? Using brain imaging, we observed that the left and right MFC, which jointly drive single-task performance according to expected rewards, divide under dual-task conditions: While the left MFC encodes the rewards driving one task, the right MFC concurrently encodes those driving the other task. The same dichotomy was observed in the lateral frontal cortex, whereas the APC combined the rewards driving both tasks. The two frontal lobes thus divide for representing simultaneously two concurrent goals coordinated by the APC. The human frontal function seems limited to driving the pursuit of two concurrent goals simultaneously.", "title": "" } ]
scidocsrr
04a5199bba708f0ac027cc8d96902ffa
3D LIDAR-Camera Extrinsic Calibration Using an Arbitrary Trihedron
[ { "docid": "cc4c58f1bd6e5eb49044353b2ecfb317", "text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.", "title": "" }, { "docid": "836402d8099846a6668272aeec9b2c9f", "text": "This paper addresses the problem of estimating the intrinsic parameters of the 3D Velodyne lidar while at the same time computing its extrinsic calibration with respect to a rigidly connected camera. Existing approaches to solve this nonlinear estimation problem are based on iterative minimization of nonlinear cost functions. In such cases, the accuracy of the resulting solution hinges on the availability of a precise initial estimate, which is often not available. In order to alleviate this issue, we divide the problem into two least-squares sub-problems, and analytically solve each one to determine a precise initial estimate for the unknown parameters. We further increase the accuracy of these initial estimates by iteratively minimizing a batch nonlinear least-squares cost function. In addition, we provide the minimal observability conditions, under which, it is possible to accurately estimate the unknown parameters. Experimental results consisting of photorealistic 3D reconstruction of indoor and outdoor scenes, as well as standard metrics of the calibration errors, are used to assess the validity of our approach.", "title": "" } ]
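The second positive passage above estimates the lidar-camera parameters in two stages: the problem is split into least-squares sub-problems that are solved analytically to obtain a precise initial estimate, which is then refined by batch nonlinear least-squares. The sketch below illustrates that estimation pattern on a toy 2-D rigid-alignment problem, not on the paper's actual lidar-camera model; the data, noise level, and variable names are illustrative assumptions only.

```python
"""Illustrative two-stage estimation: closed-form initialisation followed by
batch nonlinear least-squares refinement (toy 2-D rigid alignment)."""
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic data: points P observed after an unknown rotation/translation, plus noise.
theta_true, t_true = 0.4, np.array([1.0, -2.0])
P = rng.uniform(-5, 5, size=(40, 2))
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
Q = P @ R_true.T + t_true + 0.01 * rng.standard_normal((40, 2))

# Stage 1: analytic initial estimate.  Fit an unconstrained affine map by linear
# least squares, then project its linear part onto the nearest rotation via SVD
# (assumes the data really comes from a proper rotation).
A = np.hstack([P, np.ones((len(P), 1))])              # rows are [x, y, 1]
X, *_ = np.linalg.lstsq(A, Q, rcond=None)             # 3x2 affine parameters
U, _, Vt = np.linalg.svd(X[:2].T)                     # linear part -> nearest rotation
R0 = U @ Vt
theta0 = np.arctan2(R0[1, 0], R0[0, 0])
t0 = X[2]

# Stage 2: batch nonlinear least-squares on the rotation-constrained residual.
def residual(params):
    th, tx, ty = params
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return (P @ R.T + np.array([tx, ty]) - Q).ravel()

sol = least_squares(residual, x0=[theta0, t0[0], t0[1]])
print("initial estimate:", theta0, t0)
print("refined estimate:", sol.x, "(true:", theta_true, t_true, ")")
```

The same structure, a closed-form initialisation followed by iterative refinement of the full nonlinear residual, is what the passage relies on to keep the batch optimisation away from poor local minima.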
[ { "docid": "d1c33990b7642ea51a8a568fa348d286", "text": "Connectionist temporal classification CTC has recently shown improved performance and efficiency in automatic speech recognition. One popular decoding implementation is to use a CTC model to predict the phone posteriors at each frame and then perform Viterbi beam search on a modified WFST network. This is still within the traditional frame synchronous decoding framework. In this paper, the peaky posterior property of CTC is carefully investigated and it is found that ignoring blank frames will not introduce additional search errors. Based on this phenomenon, a novel phone synchronous decoding framework is proposed by removing tremendous search redundancy due to blank frames, which results in significant search speed up. The framework naturally leads to an extremely compact phone-level acoustic space representation: CTC lattice. With CTC lattice, efficient and effective modular speech recognition approaches, second pass rescoring for large vocabulary continuous speech recognition LVCSR, and phone-based keyword spotting KWS, are also proposed in this paper. Experiments showed that phone synchronous decoding can achieve 3-4 times search speed up without performance degradation compared to frame synchronous decoding. Modular LVCSR with CTC lattice can achieve further WER improvement. KWS with CTC lattice not only achieved significant equal error rate improvement, but also greatly reduced the KWS model size and increased the search speed.", "title": "" }, { "docid": "07c5f9d76909f47aae5970d82e06e4b5", "text": "In this paper we present a novel approach to minimally supervised synonym extraction. The approach is based on the word embeddings and aims at presenting a method for synonym extraction that is extensible to various languages. We report experiments with word vectors trained by using both the continuous bag-of-words model (CBoW) and the skip-gram model (SG) investigating the effects of different settings with respect to the contextual window size, the number of dimensions and the type of word vectors. We analyze the word categories that are (cosine) similar in the vector space, showing that cosine similarity on its own is a bad indicator to determine if two words are synonymous. In this context, we propose a new measure, relative cosine similarity, for calculating similarity relative to other cosine-similar words in the corpus. We show that calculating similarity relative to other words boosts the precision of the extraction. We also experiment with combining similarity scores from differently-trained vectors and explore the advantages of using a part-of-speech tagger as a way of introducing some light supervision, thus aiding extraction. We perform both intrinsic and extrinsic evaluation on our final system: intrinsic evaluation is carried out manually by two human evaluators and we use the output of our system in a machine translation task for extrinsic evaluation, showing that the extracted synonyms improve the evaluation metric. ©2016 PBML. Distributed under CC BY-NC-ND. Corresp. author: tuur.leeuwenberg@cs.kuleuven.be Cite as: Artuur Leeuwenberg, Mihaela Vela, Jon Dehdari, Josef van Genabith. A Minimally Supervised Approach for Synonym Extraction with Word Embeddings. The Prague Bulletin of Mathematical Linguistics No. 105, 2016, pp. 111–142. 
doi: 10.1515/pralin-2016-0006.", "title": "" }, { "docid": "2efe5c0228e6325cdbb8e0922c19924f", "text": "Patient interactions with health care providers result in entries to electronic health records (EHRs). EHRs were built for clinical and billing purposes but contain many data points about an individual. Mining these records provides opportunities to extract electronic phenotypes that can be paired with genetic data to identify genes underlying common human diseases. This task remains challenging: high quality phenotyping is costly and requires physician review; many fields in the records are sparsely filled; and our definitions of diseases are continuing to improve over time. Here we develop and evaluate a semi-supervised learning method for EHR phenotype extraction using denoising autoencoders for phenotype stratification. By combining denoising autoencoders with random forests we find classification improvements across simulation models, particularly in cases where only a small number of patients have high quality phenotype. This situation is commonly encountered in research with EHRs. Denoising autoencoders perform dimensionality reduction allowing visualization and clustering for the discovery of new subtypes of disease. This method represents a promising approach to clarify disease subtypes and improve genotype-phenotype association studies that leverage EHRs.", "title": "" }, { "docid": "d7f743ddff9863b046ab91304b37a667", "text": "In sensor networks, passive localization can be performed by exploiting the received signals of unknown emitters. In this paper, the Time of Arrival (TOA) measurements are investigated. Often, the unknown time of emission is eliminated by calculating the difference between two TOA measurements where Time Difference of Arrival (TDOA) measurements are obtained. In TOA processing, additionally, the unknown time of emission is to be estimated. Therefore, the target state is extended by the unknown time of emission. A comparison is performed investigating the attainable accuracies for localization based on TDOA and TOA measurements given by the Cramér-Rao Lower Bound (CRLB). Using the Maximum Likelihood estimator, some characteristic features of the cost functions are investigated indicating a better performance of the TOA approach. But counterintuitive, Monte Carlo simulations do not support this indication, but show the comparability of TDOA and TOA localization.", "title": "" }, { "docid": "30e229f91456c3d7eb108032b3470b41", "text": "Software as a service (SaaS) is a rapidly growing model of software licensing. In contrast to traditional software where users buy a perpetual-use license, SaaS users buy a subscription from the publisher. Whereas traditional software publishers typically release new product features as part of new versions of software once in a few years, publishers using SaaS have an incentive to release new features as soon as they are completed. We show that this property of the SaaS licensing model leads to greater investment in product development under most conditions. This increased investment leads to higher software quality in equilibrium under SaaS compared to perpetual licensing. 
The software publisher earns greater profits under SaaS while social welfare is also higher", "title": "" }, { "docid": "e7f7bc87930407c02b082fee74a8e1a5", "text": "We thoroughly and critically review studies reporting the real (refractive index) and imaginary (absorption index) parts of the complex refractive index of silica glass over the spectral range from 30 nm to 1000 microm. The general features of the optical constants over the electromagnetic spectrum are relatively consistent throughout the literature. In particular, silica glass is effectively opaque for wavelengths shorter than 200 nm and larger than 3.5-4.0 microm. Strong absorption bands are observed (i) below 160 nm due to the interaction with electrons, absorption by impurities, and the presence of OH groups and point defects; (ii) at aproximately 2.73-2.85, 3.5, and 4.3 microm also caused by OH groups; and (iii) at aproximately 9-9.5, 12.5, and 21-23 microm due to Si-O-Si resonance modes of vibration. However, the actual values of the refractive and absorption indices can vary significantly due to the glass manufacturing process, crystallinity, wavelength, and temperature and to the presence of impurities, point defects, inclusions, and bubbles, as well as to the experimental uncertainties and approximations in the retrieval methods. Moreover, new formulas providing comprehensive approximations of the optical properties of silica glass are proposed between 7 and 50 microm. These formulas are consistent with experimental data and substantially extend the spectral range of 0.21-7 microm covered by existing formulas and can be used in various engineering applications.", "title": "" }, { "docid": "23186cb9f2869e5ba09700b2b9f07c0f", "text": "Facility Layout Problem (FLP) is logic based combinatorial optimization problem. It is a meta-heuristic solution approach that gained significant attention to obtained optimal facility layout. This paper examines the convergence analysis by changing the crossover and mutation probability in an optimal facility layout. This algorithm is based on appropriate techniques that include multipoint swapped crossover and swap mutation operators. Two test cases were used for the implementations of the said technique and evaluate the robustness of the proposed method compared to other approaches in the literature. Keywords—facility layout problem, genetic algorithm, material handling cost, meta-heuristics", "title": "" }, { "docid": "e381b56801a0cb8a2dc0e9bc3346f68f", "text": "We have designed and presented a wireless sensor network monitoring and control system for aquaculture. The system can detect and control water quality parameters of temperature, dissolved oxygen content, pH value, and water level in real-time. The sensor nodes collect the water quality parameters and transmit them to the base station host computer through ZigBee wireless communication standard. The host computer is used for data analysis, processing and presentation using LabVIEW software platform. The water quality parameters will be sent to owners through short messages from the base station via the Global System for Mobile (GSM) module for notification. The experimental evaluation of the network performance metrics of quality of communication link, battery performance and data aggregation was presented. 
The experimental results show that the system has great prospect and can be used to operate in real world environment for optimum control of aquaculture", "title": "" }, { "docid": "4afa66aeaf18fae2b29a0d4c855746dd", "text": "In this work, we propose a technique that utilizes a fully convolutional network (FCN) to localize image splicing attacks. We first evaluated a single-task FCN (SFCN) trained only on the surface label. Although the SFCN is shown to provide superior performance over existing methods, it still provides a coarse localization output in certain cases. Therefore, we propose the use of a multi-task FCN (MFCN) that utilizes two output branches for multi-task learning. One branch is used to learn the surface label, while the other branch is used to learn the edge or boundary of the spliced region. We trained the networks using the CASIA v2.0 dataset, and tested the trained models on the CASIA v1.0, Columbia Uncompressed, Carvalho, and the DARPA/NIST Nimble Challenge 2016 SCI datasets. Experiments show that the SFCN and MFCN outperform existing splicing localization algorithms, and that the MFCN can achieve finer localization than the SFCN.", "title": "" }, { "docid": "680523e1eaa7abb7556655313875d353", "text": "Our aim in this paper is to clarify the range of motivations that have inspired the development of computer programs for the composition of music. We consider this to be important since different methodologies are appropriate for different motivations and goals. We argue that a widespread failure to specify the motivations and goals involved has lead to a methodological malaise in music related research. A brief consideration of some of the earliest attempts to produce computational systems for the composition of music leads us to identify four activities involving the development of computer programs which compose music each of which is inspired by different practical or theoretical motivations. These activities are algorithmic composition, the design of compositional tools, the computational modelling of musical styles and the computational modelling of music cognition. We consider these four motivations in turn, illustrating the problems that have arisen from failing to distinguish between them. We propose a terminology that clearly differentiates the activities defined by the four motivations and present methodological suggestions for research in each domain. While it is clearly important for researchers to embrace developments in related disciplines, we argue that research in the four domains will continue to stagnate unless the motivations and aims of research projects are clearly stated and appropriate methodologies are adopted for developing and evaluating systems that compose music.", "title": "" }, { "docid": "304f4e48ac5d5698f559ae504fc825d9", "text": "How the circadian clock regulates the timing of sleep is poorly understood. Here, we identify a Drosophila mutant, wide awake (wake), that exhibits a marked delay in sleep onset at dusk. Loss of WAKE in a set of arousal-promoting clock neurons, the large ventrolateral neurons (l-LNvs), impairs sleep onset. WAKE levels cycle, peaking near dusk, and the expression of WAKE in l-LNvs is Clock dependent. Strikingly, Clock and cycle mutants also exhibit a profound delay in sleep onset, which can be rescued by restoring WAKE expression in LNvs. WAKE interacts with the GABAA receptor Resistant to Dieldrin (RDL), upregulating its levels and promoting its localization to the plasma membrane. 
In wake mutant l-LNvs, GABA sensitivity is decreased and excitability is increased at dusk. We propose that WAKE acts as a clock output molecule specifically for sleep, inhibiting LNvs at dusk to promote the transition from wake to sleep.", "title": "" }, { "docid": "05307b60bd185391919ea7c1bf1ce0ec", "text": "Trace-level reuse is based on the observation that some traces (dynamic sequences of instructions) are frequently repeated during the execution of a program, and in many cases, the instructions that make up such traces have the same source operand values. The execution of such traces will obviously produce the same outcome and thus, their execution can be skipped if the processor records the outcome of previous executions. This paper presents an analysis of the performance potential of trace-level reuse and discusses a preliminary realistic implementation. Like instruction-level reuse, trace-level reuse can improve performance by decreasing resource contention and the latency of some instructions. However, we show that tracelevel reuse is more effective than instruction-level reuse because the former can avoid fetching the instructions of reused traces. This has two important benefits: it reduces the fetch bandwidth requirements, and it increases the effective instruction window size since these instructions do not occupy window entries. Moreover, trace-level reuse can compute all at once the result of a chain of dependent instructions, which may allow the processor to avoid the serialization caused by data dependences and thus, to potentially exceed the dataflow limit.", "title": "" }, { "docid": "6561b240817d9e82d7da51bfd3a58546", "text": "Vehicle safety is increasingly becoming a concern. Whether the driver is wearing a seatbelt and whether the vehicle is speeding out or not become important indicators of the vehicle safety. However, manually searching, detecting, recording and other work will spend a lot of manpower and time inefficiently. This paper proposes a cascade Adaboost classifier based seatbelt detection system to detect the vehicle windows, to complete Canny edge detection on gradient map of vehicle window images, and to perform the probabilistic Hough transform to extract the straight-lines of seatbelts. The system achieves the goal of seatbelt detection intelligently.", "title": "" }, { "docid": "7c7beabf8bcaa2af706b6c1fd92ee8dd", "text": "In this paper, two main contributions are presented to manage the power flow between a 11 wind turbine and a solar power system. The first one is to use the fuzzy logic controller as an 12 objective to find the maximum power point tracking, applied to a hybrid wind-solar system, at fixed 13 atmospheric conditions. The second one is to response to real-time control system constraints and 14 to improve the generating system performance. For this, a hardware implementation of the 15 proposed algorithm is performed using the Xilinx system generator. The experimental results show 16 that the suggested system presents high accuracy and acceptable execution time performances. The 17 proposed model and its control strategy offer a proper tool for optimizing the hybrid power system 18 performance which we can use in smart house applications. 19", "title": "" }, { "docid": "ff5d1ace34029619d79342e5fe63e0b7", "text": "In this paper, Proposes SIW slot antenna backed with a cavity for 57-64 GHz frequency. This frequency is used for wireless communication applications. 
The proposed antenna is designed by using Rogers substrate with dielectric constant of 2.2, substrate thickness is 0.381 mm and the microstrip feed is used with the input impedance of 50ohms. The structure provides 5.2GHz impedance bandwidth with a range of 57.8 to 64 GHz and matches with VSWR 2:1. The values of reflection coefficient, VSWR, gain, transmission efficiency and radiation efficiency of proposed antenna at 60GHz are −17.32dB, 1.3318, 7.19dBi, 79.5% and 89.5%.", "title": "" }, { "docid": "c74c73965123e09bfbaef3e9793c38e0", "text": "We propose a one-class neural network (OC-NN) model to detect anomalies in complex data sets. OC-NN combines the ability of deep networks to extract progressively rich representation of data with the one-class objective of creating a tight envelope around normal data. The OC-NN approach breaks new ground for the following crucial reason: data representation in the hidden layer is driven by the OC-NN objective and is thus customized for anomaly detection. This is a departure from other approaches which use a hybrid approach of learning deep features using an autoencoder and then feeding the features into a separate anomaly detection method like one-class SVM (OC-SVM). The hybrid OC-SVM approach is sub-optimal because it is unable to influence representational learning in the hidden layers. A comprehensive set of experiments demonstrate that on complex data sets (like CIFAR and GTSRB), OC-NN performs on par with state-of-the-art methods and outperformed conventional shallow methods in some scenarios.", "title": "" }, { "docid": "8f5ca5819dd28c686da78332add76fb0", "text": "The emerging Service-Oriented Computing (SOC) paradigm promises to enable businesses and organizations to collaborate in an unprecedented way by means of standard web services. To support rapid and dynamic composition of services in this paradigm, web services that meet requesters' functional requirements must be able to be located and bounded dynamically from a large and constantly changing number of service providers based on their Quality of Service (QoS). In order to enable quality-driven web service selection, we need an open, fair, dynamic and secure framework to evaluate the QoS of a vast number of web services. The fair computation and enforcing of QoS of web services should have minimal overhead but yet able to achieve sufficient trust by both service requesters and providers. In this paper, we presented our open, fair and dynamic QoS computation model for web services selection through implementation of and experimentation with a QoS registry in a hypothetical phone service provisioning market place application.", "title": "" }, { "docid": "e40eb32613ed3077177d61ac14e82413", "text": "Preamble. Billions of people are using cell phone devices on the planet, essentially in poor posture. The purpose of this study is to assess the forces incrementally seen by the cervical spine as the head is tilted forward, into worsening posture. This data is also necessary for cervical spine surgeons to understand in the reconstruction of the neck.", "title": "" }, { "docid": "ac15d2b4d14873235fe6e4d2dfa84061", "text": "Despite strong popular conceptions of gender differences in emotionality and striking gender differences in the prevalence of disorders thought to involve emotion dysregulation, the literature on the neural bases of emotion regulation is nearly silent regarding gender differences (Gross, 2007; Ochsner & Gross, in press). 
The purpose of the present study was to address this gap in the literature. Using functional magnetic resonance imaging, we asked male and female participants to use a cognitive emotion regulation strategy (reappraisal) to down-regulate their emotional responses to negatively valenced pictures. Behaviorally, men and women evidenced comparable decreases in negative emotion experience. Neurally, however, gender differences emerged. Compared with women, men showed (a) lesser increases in prefrontal regions that are associated with reappraisal, (b) greater decreases in the amygdala, which is associated with emotional responding, and (c) lesser engagement of ventral striatal regions, which are associated with reward processing. We consider two non-competing explanations for these differences. First, men may expend less effort when using cognitive regulation, perhaps due to greater use of automatic emotion regulation. Second, women may use positive emotions in the service of reappraising negative emotions to a greater degree. We then consider the implications of gender differences in emotion regulation for understanding gender differences in emotional processing in general, and gender differences in affective disorders.", "title": "" }, { "docid": "4062ef369dce8a6b010282fb362040c4", "text": "How people in the city perceive their surroundings depends on a variety of dynamic and static context factors such as road traffic, the feeling of safety, urban architecture, etc. Such subjective and context-dependent perceptions can trigger different emotions, which enable additional insights into the spatial and temporal configuration of urban structures. This paper presents the Urban Emotions concept that proposes a human-centred approach for extracting contextual emotional information from human and technical sensors. The methodology proposed in this paper consists of four steps: 1) detecting emotions using wristband sensors, 2) “ground-truthing” these measurements using a People as Sensors location-based service, 3) extracting emotion information from crowdsourced data like Twitter, and 4) correlating the measured and extracted emotions. Finally, the emotion information is mapped and fed back into urban planning for decision support and for evaluating ongoing planning processes.", "title": "" } ]
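Several entries above in this list, the one-class neural network (OC-NN) passage contrasts its end-to-end objective with a hybrid baseline that first learns deep features with an autoencoder and then feeds them to a separate one-class SVM. The sketch below renders that hybrid baseline, not OC-NN itself; the layer sizes, the nu parameter, and the synthetic data are arbitrary assumptions.

```python
"""Illustrative hybrid anomaly-detection baseline: autoencoder features fed to a
one-class SVM (the arrangement the OC-NN passage argues against)."""
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(500, 20))        # train on "normal" data only

# Autoencoder: an MLP trained to reproduce its input through a narrow bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(16, 4, 16), activation="relu",
                  max_iter=2000, random_state=0)
ae.fit(X_normal, X_normal)

def encode(X):
    """Forward pass up to the 4-unit bottleneck (first two ReLU layers of the MLP)."""
    h = np.maximum(X @ ae.coefs_[0] + ae.intercepts_[0], 0.0)
    return np.maximum(h @ ae.coefs_[1] + ae.intercepts_[1], 0.0)

# One-class SVM on the bottleneck features: +1 = inlier, -1 = anomaly.
ocsvm = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(encode(X_normal))
X_test = np.vstack([rng.normal(0.0, 1.0, size=(5, 20)),   # more normal points
                    rng.normal(6.0, 1.0, size=(5, 20))])  # obvious anomalies
print(ocsvm.predict(encode(X_test)))
```

The passage's criticism of this arrangement is that the bottleneck representation is learned purely for reconstruction and is never shaped by the one-class objective, which is what the OC-NN formulation changes.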
scidocsrr
5b96fcdf269af950900d3a8246473724
3D Point Cloud Learning for Large-scale Environment Analysis and Place Recognition
[ { "docid": "845ee0b77e30a01d87e836c6a84b7d66", "text": "This paper proposes an efficient and effective scheme to applying the sliding window approach popular in computer vision to 3D data. Specifically, the sparse nature of the problem is exploited via a voting scheme to enable a search through all putative object locations at any orientation. We prove that this voting scheme is mathematically equivalent to a convolution on a sparse feature grid and thus enables the processing, in full 3D, of any point cloud irrespective of the number of vantage points required to construct it. As such it is versatile enough to operate on data from popular 3D laser scanners such as a Velodyne as well as on 3D data obtained from increasingly popular push-broom configurations. Our approach is “embarrassingly parallelisable” and capable of processing a point cloud containing over 100K points at eight orientations in less than 0.5s. For the object classes car, pedestrian and bicyclist the resulting detector achieves best-in-class detection and timing performance relative to prior art on the KITTI dataset as well as compared to another existing 3D object detection approach.", "title": "" }, { "docid": "3da4bcf1e3bcb3c5feb27fd05e43da80", "text": "This paper introduces a texture representation suitable for recognizing images of textured surfaces under a wide range of transformations, including viewpoint changes and nonrigid deformations. At the feature extraction stage, a sparse set of affine Harris and Laplacian regions is found in the image. Each of these regions can be thought of as a texture element having a characteristic elliptic shape and a distinctive appearance pattern. This pattern is captured in an affine-invariant fashion via a process of shape normalization followed by the computation of two novel descriptors, the spin image and the RIFT descriptor. When affine invariance is not required, the original elliptical shape serves as an additional discriminative feature for texture recognition. The proposed approach is evaluated in retrieval and classification tasks using the entire Brodatz database and a publicly available collection of 1,000 photographs of textured surfaces taken from different viewpoints.", "title": "" }, { "docid": "7a8fbfe463f6d5c61df7db1c1d2670c9", "text": "State-of-the-art autonomous driving systems rely heavily on detailed and highly accurate prior maps. However, outside of small urban areas, it is very challenging to build, store, and transmit detailed maps since the spatial scales are so large. Furthermore, maintaining detailed maps of large rural areas can be impracticable due to the rapid rate at which these environments can change. This is a significant limitation for the widespread applicability of autonomous driving technology, which has the potential for an incredibly positive societal impact. In this paper, we address the problem of autonomous navigation in rural environments through a novel mapless driving framework that combines sparse topological maps for global navigation with a sensor-based perception system for local navigation. First, a local navigation goal within the sensor view of the vehicle is chosen as a waypoint leading towards the global goal. Next, the local perception system generates a feasible trajectory in the vehicle frame to reach the waypoint while abiding by the rules of the road for the segment being traversed. 
These trajectories are updated to remain in the local frame using the vehicle's odometry and the associated uncertainty based on the least-squares residual and a recursive filtering approach, which allows the vehicle to navigate road networks reliably, and at high speed, without detailed prior maps. We demonstrate the performance of the system on a full-scale autonomous vehicle navigating in a challenging rural environment and benchmark the system on a large amount of collected data.", "title": "" }, { "docid": "348a5c33bde53e7f9a1593404c6589b4", "text": "Few prior works study deep learning on point sets. PointNet [20] is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.", "title": "" } ]
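The last positive passage above (PointNet++) builds its hierarchy by applying a shared point-set encoder recursively on nested partitions of the input cloud, grouping points around sampled centroids at increasing contextual scales. The sketch below is a schematic NumPy rendering of one such set-abstraction level, with farthest point sampling, ball-query grouping, a shared per-point transform, and max-pooling per group; random weights stand in for the learned shared MLP, so it shows the data flow only and is not the authors' implementation.

```python
"""Schematic sketch of one PointNet++-style set-abstraction level in plain NumPy."""
import numpy as np

def farthest_point_sampling(xyz, m):
    """Greedily pick m well-spread centroid indices from an (n, 3) cloud."""
    dist = np.full(len(xyz), np.inf)
    chosen = [0]
    for _ in range(m - 1):
        dist = np.minimum(dist, np.linalg.norm(xyz - xyz[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return np.array(chosen)

def set_abstraction(xyz, feats, m, radius, k=16, out_dim=64, seed=0):
    """Sample centroids, gather local neighbourhoods, apply a shared
    per-point linear+ReLU transform, then max-pool each group."""
    W = np.random.default_rng(seed).standard_normal((feats.shape[1] + 3, out_dim)) * 0.1
    centroids = farthest_point_sampling(xyz, m)
    pooled = []
    for c in centroids:
        d = np.linalg.norm(xyz - xyz[c], axis=1)
        nbrs = np.argsort(d)[:k]
        nbrs = nbrs[d[nbrs] < radius]                 # centroid itself always kept (d == 0)
        local = np.hstack([xyz[nbrs] - xyz[c], feats[nbrs]])    # local-frame coordinates
        pooled.append(np.maximum(local @ W, 0.0).max(axis=0))   # shared transform + max-pool
    return xyz[centroids], np.stack(pooled)

# Toy usage: 1024 raw points -> 32 groups -> 8 groups, with growing radius.
pts = np.random.default_rng(1).uniform(0.0, 1.0, size=(1024, 3))
l1_xyz, l1_feat = set_abstraction(pts, feats=pts, m=32, radius=0.2)
l2_xyz, l2_feat = set_abstraction(l1_xyz, l1_feat, m=8, radius=0.5)
print(l1_feat.shape, l2_feat.shape)   # (32, 64) (8, 64)
```

Stacking such levels with decreasing point counts and increasing radii is what yields features at multiple contextual scales, the property the passage credits for coping with non-uniform sampling density.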
[ { "docid": "a854ee8cf82c4bd107e93ed0e70ee543", "text": "Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness of mediators and to shift from less effective to more effective mediators. Across a series of experiments, participants used a keyword encoding strategy to learn word pairs with test-restudy practice or restudy only. Robust testing effects were obtained in all experiments, and results supported predictions of the mediator shift hypothesis. First, a greater proportion of keyword shifts occurred during test-restudy practice versus restudy practice. Second, a greater proportion of keyword shifts occurred after retrieval failure trials versus retrieval success trials during test-restudy practice. Third, a greater proportion of keywords were recalled on a final keyword recall test after test-restudy versus restudy practice.", "title": "" }, { "docid": "cc3d0d9676ad19f71b4a630148c4211f", "text": "OBJECTIVES\nPrevious studies have revealed that memory performance is diminished in chronic pain patients. Few studies, however, have assessed multiple components of memory in a single sample. It is currently also unknown whether attentional problems, which are commonly observed in chronic pain, mediate the decline in memory. Finally, previous studies have focused on middle-aged adults, and a possible detrimental effect of aging on memory performance in chronic pain patients has been commonly disregarded. This study, therefore, aimed at describing the pattern of semantic, working, and visual and verbal episodic memory performance in participants with chronic pain, while testing for possible contributions of attention and age to task performance.\n\n\nMETHODS\nThirty-four participants with chronic pain and 32 pain-free participants completed tests of episodic, semantic, and working memory to assess memory performance and a test of attention.\n\n\nRESULTS\nParticipants with chronic pain performed worse on tests of working memory and verbal episodic memory. A decline in attention explained some, but not all, group differences in memory performance. Finally, no additional effect of age on the diminished task performance in participants with chronic pain was observed.\n\n\nDISCUSSION\nTaken together, the results indicate that chronic pain significantly affects memory performance. Part of this effect may be caused by underlying attentional dysfunction, although this could not fully explain the observed memory decline. An increase in age in combination with the presence of chronic pain did not additionally affect memory performance.", "title": "" }, { "docid": "4e791e4367b5ef9ff4259a87b919cff7", "text": "Considerable attention has been paid to dating the earliest appearance of hominins outside Africa. The earliest skeletal and artefactual evidence for the genus Homo in Asia currently comes from Dmanisi, Georgia, and is dated to approximately 1.77–1.85 million years ago (Ma)1. Two incisors that may belong to Homo erectus come from Yuanmou, south China, and are dated to 1.7 Ma2; the next-oldest evidence is an H. erectus cranium from Lantian (Gongwangling)—which has recently been dated to 1.63 Ma3—and the earliest hominin fossils from the Sangiran dome in Java, which are dated to about 1.5–1.6 Ma4. 
Artefacts from Majuangou III5 and Shangshazui6 in the Nihewan basin, north China, have also been dated to 1.6–1.7 Ma. Here we report an Early Pleistocene and largely continuous artefact sequence from Shangchen, which is a newly discovered Palaeolithic locality of the southern Chinese Loess Plateau, near Gongwangling in Lantian county. The site contains 17 artefact layers that extend from palaeosol S15—dated to approximately 1.26 Ma—to loess L28, which we date to about 2.12 Ma. This discovery implies that hominins left Africa earlier than indicated by the evidence from Dmanisi. An Early Pleistocene artefact assemblage from the Chinese Loess Plateau indicates that hominins had left Africa by at least 2.1 million years ago, and occupied the Loess Plateau repeatedly for a long time.", "title": "" }, { "docid": "8ebab4a80cdff32082b86b7c698856f0", "text": "One aim of component-based software engineering (CBSE) is to enable the prediction of extra-functional properties, such as performance and reliability, utilising a well-defined composition theory. Nowadays, such theories and their accompanying prediction methods are still in a maturation stage. Several factors influencing extra-functional properties need additional research to be understood. A special problem in CBSE stems from its specific development process: Software components should be specified and implemented independently from their later context to enable reuse. Thus, extra-functional properties of components need to be specified in a parametric way to take different influencing factors like the hardware platform or the usage profile into account. Our approach uses the Palladio Component Model (PCM) to specify component-based software architectures in a parametric way. This model offers direct support of the CBSE development process by dividing the model creation among the developer roles. This paper presents our model and a simulation tool based on it, which is capable of making performance predictions. Within a case study, we show that the resulting prediction accuracy is sufficient to support the evaluation of architectural design decisions.", "title": "" }, { "docid": "f6342101ff8315bcaad4e4f965e6ba8a", "text": "In radar imaging it is well known that relative motion or deformation of parts of illuminated objects induce additional features in the Doppler frequency spectra. These features are called micro-Doppler effect and appear as sidebands around the central Doppler frequency. They can provide valuable information about the structure of the moving parts and may be used for identification purposes [1].", "title": "" }, { "docid": "24006b9eb670c84904b53320fbedd32c", "text": "Maturity Models have been introduced, over the last four decades, as guides and references for Information System management in organizations from different sectors of activity. In the healthcare field, Maturity Models have also been used to deal with the enormous complexity and demand of Hospital Information Systems. This article presents a research project that aimed to develop a new comprehensive model of maturity for a health area. HISMM (Hospital Information System Maturity Model) was developed to address a complexity of SIH and intends to offer a useful tool for the demanding role of its management. The HISMM has the peculiarity of congregating a set of key maturity Influence Factors and respective characteristics, enabling not only the assessment of the global maturity of a HIS but also the individual maturity of its different dimensions. 
In this article, we present the methodology for the development of Maturity Models adopted for the creation of HISMM and the underlying reasons for its choice.", "title": "" }, { "docid": "d3bdff7b747b5804971534cfbfd2ce53", "text": "The consequences of security problems are increasingly serious. These problems can now lead to personal injury, prolonged downtime and irreparable damage to capital goods. To achieve this, systems require end-to-end security solutions that cover the layers of connectivity, furthermore, guarantee the privatization and protection of data circulated via networks. In this paper, we will give a definition to the Internet of things, try to dissect its architecture (protocols, layers, entities …), thus giving a state of the art of security in the field of internet of things (Faults detected in each layer …), finally, mention the solutions proposed until now to help researchers start their researches on internet of things security subject.", "title": "" }, { "docid": "ee0819baef1a64702ef4b6b93564ed75", "text": "Solitary pigmented melanocytic intraoral lesions of the oral cavity are rare. Oral nevus is a congenital or acquired benign neoplasm. Oral compound nevus constitutes 5.9%-16.5% of all oral melanocytic nevi. The oral compound nevus is commonly seen on hard palate and buccal mucosa and rarely on other intraoral sites. The objective of this article is to present a rare case report of oral compound nevus in the retromolar pad region along with a review of literature. A 22 year old female reported with a solitary black pigmented papule at retromolar pad region which was surgically removed and microscopic investigation confirmed the diagnosis of oral compound nevus.", "title": "" }, { "docid": "8a77ab964896d3fea327e76b2efad8ef", "text": "We present the fundamental ideas underlying statistical hypothesis testing using the frequentist framework. We start with a simple example that builds up the one-sample t-test from the beginning, explaining important concepts such as the sampling distribution of the sample mean, and the iid assumption. Then we examine the meaning of the p-value in detail, and discuss several important misconceptions about what a p-value does and does not tell us. This leads to a discussion of Type I, II error and power, and Type S and M error. An important conclusion from this discussion is that one should aim to carry out appropriately powered studies. Next, we discuss two common issues we have encountered in psycholinguistics and linguistics: running experiments until significance is reached, and the “garden-of-forking-paths” problem discussed by Gelman and others. The best way to use frequentist methods is to run appropriately powered studies, check model assumptions, clearly separate exploratory data analysis from planned comparisons decided upon before the study was run, and always attempt to replicate results.", "title": "" }, { "docid": "db78855cfd464e54f6aafdce8b412a2f", "text": "Agent is not only the core concept of complexity theory, but the most elementary component in implementing knowledge management systems. This article, based on the theory of complexity and combined the obtained research results, discusses the definition, structure, composition of different agents. 
It also concerns the relationship among agents in knowledge management and the action mode of multiple agents.", "title": "" }, { "docid": "a064ad01edd6a369d939736e04831e50", "text": "Asthma is frequently undertreated, resulting in a relatively high prevalence of patients with uncontrolled disease, characterized by the presence of symptoms and risk of adverse outcomes. Patients with uncontrolled asthma have a higher risk of morbidity and mortality, underscoring the importance of identifying uncontrolled disease and modifying management plans to improve control. Several assessment tools exist to evaluate control with various cutoff points and measures, but these tools do not reliably correlate with physiological measures and should be considered a supplement to physiological tests. When attempting to improve control in patients, nonpharmacological interventions should always be attempted before changing or adding pharmacotherapies. Among patients with severe, uncontrolled asthma, individualized treatment based on asthma phenotype and eosinophil presence should be considered. The efficacy of the anti-IgE antibody omalizumab has been well established for patients with allergic asthma, and novel biologic agents targeting IL-5, IL-13, IL-4, and other allergic pathways have been investigated for patients with allergic or eosinophilic asthma. Fevipiprant (a CRTH2 [chemokine receptor homologous molecule expressed on Th2 cells] antagonist) and imatinib (a tyrosine kinase inhibition) are examples of nonbiologic therapies that may be useful for patients with severe, uncontrolled asthma. Incorporation of new and emerging treatment into therapeutic strategies for patients with severe asthma may improve outcomes for this patient population.", "title": "" }, { "docid": "28e0bd104c8654ed9ad007c66bae0461", "text": "Today, journalist, information analyst, and everyday news consumers are tasked with discerning and fact-checking the news. This task has became complex due to the ever-growing number of news sources and the mixed tactics of maliciously false sources. To mitigate these problems, we introduce the The News Landscape (NELA) Toolkit: an open source toolkit for the systematic exploration of the news landscape. NELA allows users to explore the credibility of news articles using well-studied content-based markers of reliability and bias, as well as, filter and sort through article predictions based on the users own needs. In addition, NELA allows users to visualize the media landscape at different time slices using a variety of features computed at the source level. NELA is built with a modular, pipeline design, to allow researchers to add new tools to the toolkit with ease. Our demo is an early transition of automated news credibility research to assist human fact-checking efforts and increase the understanding of the news ecosystem as a whole. To use this tool, go to http://nelatoolkit.science", "title": "" }, { "docid": "0e120a405e8538c8d46fe0a50463366f", "text": "Two studies were conducted to investigate the effects of red pepper (capsaicin) on feeding behaviour and energy intake. In the first study, the effects of dietary red pepper added to high-fat (HF) and high-carbohydrate (HC) meals on subsequent energy and macronutrient intakes were examined in thirteen Japanese female subjects. 
After the ingestion of a standardized dinner on the previous evening, the subjects ate an experimental breakfast (1883 kJ) of one of the following four types: (1) HF; (2) HF and red pepper (10 g); (3) HC; (4) HC and red pepper. Ad libitum energy and macronutrient intakes were measured at lunch-time. The HC breakfast significantly reduced the desire to eat and hunger after breakfast. The addition of red pepper to the HC breakfast also significantly decreased the desire to eat and hunger before lunch. Differences in diet composition at breakfast time did not affect energy and macronutrient intakes at lunch-time. However, the addition of red pepper to the breakfast significantly decreased protein and fat intakes at lunch-time. In Study 2, the effects of a red-pepper appetizer on subsequent energy and macronutrient intakes were examined in ten Caucasian male subjects. After ingesting a standardized breakfast, the subjects took an experimental appetizer (644 kJ) at lunch-time of one of the following two types: (1) mixed diet and appetizer; (2) mixed diet and red-pepper (6 g) appetizer. The addition of red pepper to the appetizer significantly reduced the cumulative ad libitum energy and carbohydrate intakes during the rest of the lunch and in the snack served several hours later. Moreover, the power spectral analysis of heart rate revealed that this effect of red pepper was associated with an increase in the ratio sympathetic: parasympathetic nervous system activity. These results indicate that the ingestion of red pepper decreases appetite and subsequent protein and fat intakes in Japanese females and energy intake in Caucasian males. Moreover, this effect might be related to an increase in sympathetic nervous system activity in Caucasian males.", "title": "" }, { "docid": "73242ddfc886fd767d6689d608918cad", "text": "The chemical reduction of graphene oxide (GO) typically involves highly toxic reducing agents that are harmful to human health and environment, and complicated surface modification is often needed to avoid aggregation of the reduced GO during reduction process. In this paper, a green and facile strategy is reported for the fabrication of soluble reduced GO. The proposed method is based on the reduction of exfoliated GO in green tea solution by making use of the reducing capability and the aromatic rings of tea polyphenol (TP) that contained in tea solution. The measurements of the resultant graphene confirm the efficient removal of the oxygen-containing groups in GO. The strong interactions between the reduced graphene and the aromatic TPs guarantee the good dispersion of the reduced graphene in both aqueous and a variety of organic solvents. These features endow this green approach with great potential in constructing of various graphene-based materials, especially for high-performance biorelated materials as demonstrated in this study of chitosan/graphene composites.", "title": "" }, { "docid": "d7eca0ca4da72bca2d74d484e4dec8ce", "text": "Recent studies have shown that the human genome has a haplotype block structure such that it can be divided into discrete blocks of limited haplotype diversity. Patil et al. [6] and Zhang et al. [12] developed algorithms to partition haplotypes into blocks with minimum number of tag SNPs for the entire chromosome. However, it is not clear how to partition haplotypes into blocks with restricted number of SNPs when only limited resources are available. 
In this paper, we first formulated this problem as finding a block partition with a fixed number of tag SNPs that can cover the maximal percentage of a genome. Then we solved it by two dynamic programming algorithms, which are fairly flexible to take into account the knowledge of functional polymorphism. We applied our algorithms to the published SNP data of human chromosome 21 combining with the functional information of these SNPs and demonstrated the effectiveness of them. Statistical investigation of the relationship between the starting points of a block partition and the coding and non-coding regions illuminated that the SNPs at these starting points are not significantly enriched in coding regions. We also developed an efficient algorithm to find all possible long local maximal haplotypes across a subset of samples. After applying this algorithm to the human chromosome 21 haplotype data, we found that samples with long local haplotypes are not necessarily globally similar.", "title": "" }, { "docid": "93c84b6abfe30ff7355e4efc310b440b", "text": "Parallel file systems (PFS) are widely-used in modern computing systems to mask the ever-increasing performance gap between computing and data access. PFSs favor large requests, and do not work well for small requests, especially small random requests. Newer Solid State Drives (SSD) have excellent performance on small random data accesses, but also incur a high monetary cost. In this study, we propose a hybrid architecture named the Smart Selective SSD Cache (S4D-Cache), which employs a small set of SSD-based file servers as a selective cache of conventional HDD-based file servers. A novel scheme is introduced to identify performance-critical data, and conduct selective cache admission to fully utilize the hybrid architecture in terms of data-access parallelism and randomness. We have implemented an S4D-Cache under the MPI-IO and PVFS2 parallel file system. Our experiments show that S4D-Cache can significantly improve I/O throughput, and is a promising approach for parallel applications.", "title": "" }, { "docid": "e061e276254cb541826a066dcaf7a460", "text": "Effective data visualization is a key part of the discovery process in the era of “big data”. It is the bridge between the quantitative content of the data and human intuition, and thus an essential component of the scientific path from data into knowledge and understanding. Visualization is also essential in the data mining process, directing the choice of the applicable algorithms, and in helping to identify and remove bad data from the analysis. However, a high complexity or a high dimensionality of modern data sets represents a critical obstacle. How do we visualize interesting structures and patterns that may exist in hyper-dimensional data spaces? A better understanding of how we can perceive and interact with multidimensional information poses some deep questions in the field of cognition technology and human-computer interaction. To this effect, we are exploring the use of immersive virtual reality platforms for scientific data visualization, both as software and inexpensive commodity hardware. These potentially powerful and innovative tools for multi-dimensional data visualization can also provide an easy and natural path to a collaborative data visualization and exploration, where scientists can interact with their data and their colleagues in the same visual space. 
Immersion provides benefits beyond the traditional “desktop” visualization tools: it leads to a demonstrably better perception of a datascape geometry, more intuitive data understanding, and a better retention of the perceived relationships in the data.", "title": "" }, { "docid": "9c800a53208bf1ded97e963ed4f80b28", "text": "We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed-feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 μm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular -- adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.", "title": "" }, { "docid": "9f177381c2ba4c6c90faee339910c6c6", "text": "Behavior genetics has demonstrated that genetic variance is an important component of variation for all behavioral outcomes , but variation among families is not. These results have led some critics of behavior genetics to conclude that heritability is so ubiquitous as to have few consequences for scientific understanding of development , while some behavior genetic partisans have concluded that family environment is not an important cause of developmental outcomes. Both views are incorrect. Genotype is in fact a more systematic source of variability than environment, but for reasons that are methodological rather than substantive. Development is fundamentally nonlinear, interactive, and difficult to control experimentally. Twin studies offer a useful methodologi-cal shortcut, but do not show that genes are more fundamental than environments. The nature-nurture debate is over. The bottom line is that everything is heritable, an outcome that has taken all sides of the nature-nurture debate by surprise. Irving Gottesman and I have suggested that the universal influence of genes on behavior be enshrined as the first law of behavior genetics (Turkheimer & Gottesman, 1991), and at the risk of naming laws that I can take no credit for discovering, it is worth stating the nearly unanimous results of behavior genetics in a more formal manner. ● First Law. All human behavioral traits are heritable. ● Second Law. The effect of being raised in the same family is smaller than the effect of genes. ● Third Law. A substantial portion of the variation in complex human behavioral traits is not accounted for by the effects of genes or families. It is not my purpose in this brief article to defend these three laws against the many exceptions that might be claimed. 
The point is that now that the empirical facts are in and no longer a matter of serious controversy, it is time to turn attention to what the three laws mean, to the implications of the genetics of behavior for an understanding of complex human behavior and its development. VARIANCE AND CAUSATION IN BEHAVIORAL DEVELOPMENT If the first two laws are taken literally , they seem to herald a great victory for the nature side of the old debate: Genes matter, families do not. To understand why such views are at best an oversimplification of a complex reality, it is necessary to consider the newest wave of opposition that behavior genetics has generated. These new critics , whose most …", "title": "" } ]
scidocsrr
6f5877517b7edbe05f4ea44b40e058d7
Scale space texture analysis for face anti-spoofing
[ { "docid": "152e5d8979eb1187e98ecc0424bb1fde", "text": "Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. This paper proposes a principled multi-task learning approach based on Discriminative Gaussian Process Latent Variable Model (DGPLVM), named GaussianFace, for face verification. In contrast to relying unrealistically on a single training data source, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. To enhance discriminative power, we introduced a more efficient equivalent form of Kernel Fisher Discriminant Analysis to DGPLVM. To speed up the process of inference and prediction, we exploited the low rank approximation method. Extensive experiments demonstrated the effectiveness of the proposed model in learning from diverse data sources and generalizing to unseen domains. Specifically, the accuracy of our algorithm achieved an impressive accuracy rate of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the human-level performance in face verification (97.53%) on LFW is surpassed.", "title": "" } ]
[ { "docid": "b6e6784d18c596565ca1e4d881398a0d", "text": "Uncovering lies (or deception) is of critical importance to many including law enforcement and security personnel. Though these people may try to use many different tactics to discover deception, previous research tells us that this cannot be accomplished successfully without aid. This manuscript reports on the promising results of a research study where data and text mining methods along with a sample of real-world data from a high-stakes situation is used to detect deception. At the end, the information fusion based classification models produced better than 74% classification accuracy on the holdout sample using a 10-fold cross validation methodology. Nonetheless, artificial neural networks and decision trees produced accuracy rates of 73.46% and 71.60% respectively. However, due to the high stakes associated with these types of decisions, the extra effort of combining the models to achieve higher accuracy", "title": "" }, { "docid": "13630f611d3390b91b29ded67d4c81b1", "text": "With better natural language semantic representations, computers can do more applications more efficiently as a result of better understanding of natural text. However, no single semantic representation at this time fulfills all requirements needed for a satisfactory representation. Logic-based representations like first-order logic capture many of the linguistic phenomena using logical constructs, and they come with standardized inference mechanisms, but standard first-order logic fails to capture the “graded” aspect of meaning in languages. Distributional models use contextual similarity to predict the “graded” semantic similarity of words and phrases but they do not adequately capture logical structure. In addition, there are a few recent attempts to combine both representations either on the logic side (still, not a graded representation), or in the distribution side(not full logic). We propose using probabilistic logic to represent natural language semantics combining the expressivity and the automated inference of logic, and the gradedness of distributional representations. We evaluate this semantic representation on two tasks, Recognizing Textual Entailment (RTE) and Semantic Textual Similarity (STS). Doing RTE and STS better is an indication of a better semantic understanding. Our system has three main components, 1. Parsing and Task Representation, 2. Knowledge Base Construction, and 3. Inference. The input natural sentences of the RTE/STS task are mapped to logical form using Boxer which is a rule based system built on top of a CCG parser, then they are used to formulate the RTE/STS problem in probabilistic logic. Then, a knowledge base is represented as weighted inference rules collected from different sources like WordNet and on-the-fly lexical rules from distributional semantics. An advantage of using probabilistic logic is that more rules can be added from more resources easily by mapping them to logical rules and weighting them appropriately. The last component is the inference, where we solve the probabilistic logic inference problem using an appropriate probabilistic logic tool like Markov Logic Network (MLN), or Probabilistic Soft Logic (PSL). We show how to solve the inference problems in MLNs efficiently for RTE using a modified closed-world assumption and a new inference algorithm, and how to adapt MLNs and PSL for STS by relaxing conjunctions. Experiments show that our semantic representation can handle RTE and STS reasonably well. 
For the future work, our short-term goals are 1. better RTE task representation and finite domain handling, 2. adding more inference rules, precompiled and on-the-fly, 3. generalizing the modified closed– world assumption, 4. enhancing our inference algorithm for MLNs, and 5. adding a weight learning step to better adapt the weights. On the longer-term, we would like to apply our semantic representation to the question answering task, support generalized quantifiers, contextualize WordNet rules we use, apply our semantic representation to languages other than English, and implement a probabilistic logic Inference Inspector that can visualize the proof structure.", "title": "" }, { "docid": "666919bbe7e63d99e3314657aecb3e02", "text": " Real-time Upper-body Human Pose Estimation using a Depth Camera Himanshu Prakash Jain, Anbumani Subramanian HP Laboratories HPL-2010-190 Haar cascade based detection, template matching, weighted distance transform and pose estimation Automatic detection and pose estimation of humans is an important task in HumanComputer Interaction (HCI), user interaction and event analysis. This paper presents a model based approach for detecting and estimating human pose by fusing depth and RGB color data from monocular view. The proposed system uses Haar cascade based detection and template matching to perform tracking of the most reliably detectable parts namely, head and torso. A stick figure model is used to represent the detected body parts. Then, the fitting is performed independently for each limb, using the weighted distance transform map. The fact that each limb is fitted independently speeds-up the fitting process and makes it robust, avoiding the combinatorial complexity problems that are common with these types of methods. The output is a stick figure model consistent with the pose of the person in the given input image. The algorithm works in real-time and is fully automatic and can detect multiple non-intersecting people. 
External Posting Date: November 21, 2010 [Fulltext] Approved for External Publication Internal Posting Date: November 21, 2010 [Fulltext] Copyright 2010 Hewlett-Packard Development Company, L.P.", "title": "" }, { "docid": "10733e267b9959ef57aac7cc18eee5d6", "text": "In this paper, we explore how strongly author name disambiguation (AND) affects the results of an author-based citation analysis study, and identify conditions under which the commonly used simplified approach of using surnames and first initials may suffice in practice. We compare author citation ranking and co-citation mapping results in the stem cell research field 2004-2009 between two AND approaches: the traditional simplified approach of using author surnames and first initials, and a sophisticated algorithmic approach. We find that the traditional approach leads to extremely distorted rankings and substantially distorted mappings of authors in this field when based on firstor all-author citation counting, whereas last-author based citation ranking and co-citation mapping both appear relatively immune to the author name ambiguity problem. This is largely because romanized names of Chinese and Korean authors, who are very active in this field, are extremely ambiguous, but few of these researchers consistently publish as last authors in by-lines. We conclude that more earnest effort is required to deal with the author name ambiguity problem in both citation analysis and information retrieval, especially given the current trend towards globalization. In the stem cell field, where lab heads are traditionally listed as last authors in by-lines, last-author based citation ranking and co-citation mapping using the traditional simple approach to author name disambiguation may serve as a simple workaround, but likely at the price of largely filtering out Chinese and Korean contributions to the field as well as important contributions by young researchers.", "title": "" }, { "docid": "b0356ab3a4a3917386bfe928a68031f5", "text": "Even when Ss fail to recall a solicited target, they can provide feeling-of-knowing (FOK) judgments about its availability in memory. Most previous studies addressed the question of FOK accuracy, only a few examined how FOK itself is determined, and none asked how the processes assumed to underlie FOK also account for its accuracy. The present work examined all 3 questions within a unified model, with the aim of demystifying the FOK phenomenon. The model postulates that the computation of FOK is parasitic on the processes involved in attempting to retrieve the target, relying on the accessibility of pertinent information. It specifies the links between memory strength, accessibility of correct and incorrect information about the target, FOK judgments, and recognition memory. Evidence from 3 experiments is presented. The results challenge the view that FOK is based on a direct, privileged access to an internal monitor.", "title": "" }, { "docid": "15731cee350b1934f2e9ef9fd218a478", "text": "In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster in terms of computation time. Prediction performance is worse than the state-of-the-art Weisfeiler Lehman RDF kernel, but our kernel is a factor 10 faster to compute. 
Thus, we consider this kernel a very suitable baseline for learning from RDF data. Furthermore, we extend this kernel to handle RDF literals as bag-ofwords feature vectors, which increases performance in two of the four experiments.", "title": "" }, { "docid": "e2a01b8e1c7bc57a596219d2ea5364b7", "text": "Most biometric cryptosystems that have been proposed to protect fingerprint minutiae make use of public alignment helper data. This, however, has the inadvertent effect of information leakage about the protected templates. A countermeasure to avoid auxiliary alignment data is to protect absolutely pre-aligned fingerprints. As a proof of concept, we run performance evaluations of a minutiae fuzzy vault with an automatic method for absolute pre-alignment. Therefore, we propose a new method for estimating a fingerprint's directed reference point by modeling the local orientation around the core as a tented arch.", "title": "" }, { "docid": "427d0d445985ac4eb31c7adbaf6f1e22", "text": "In this work, we jointly address the problem of text detection and recognition in natural scene images based on convolutional recurrent neural networks. We propose a unified network that simultaneously localizes and recognizes text with a single forward pass, avoiding intermediate processes, such as image cropping, feature re-calculation, word separation, and character grouping. In contrast to existing approaches that consider text detection and recognition as two distinct tasks and tackle them one by one, the proposed framework settles these two tasks concurrently. The whole framework can be trained end-to-end, requiring only images, ground-truth bounding boxes and text labels. The convolutional features are calculated only once and shared by both detection and recognition, which saves processing time. Through multi-task training, the learned features become more informative and improves the overall performance. Our proposed method has achieved competitive performance on several benchmark datasets.", "title": "" }, { "docid": "4b0b2c7168fa04543d77bee46af14b0a", "text": "Individuals face privacy risks when providing personal location data to potentially untrusted location based services (LBSs). We develop and demonstrate CacheCloak, a system that enables realtime anonymization of location data. In CacheCloak, a trusted anonymizing server generates mobility predictions from historical data and submits intersecting predicted paths simultaneously to the LBS. Each new predicted path is made to intersect with other users' paths, ensuring that no individual user's path can be reliably tracked over time. Mobile users retrieve cached query responses for successive new locations from the trusted server, triggering new prediction only when no cached response is available for their current locations. A simulated hostile LBS with detailed mobility pattern data attempts to track users of CacheCloak, generating a quantitative measure of location privacy over time. GPS data from a GIS-based traffic simulation in an urban environment shows that CacheCloak can achieve realtime location privacy without loss of location accuracy or availability.", "title": "" }, { "docid": "5fa0bc1f4a7f9573e90790d751bbfc6d", "text": "The online shopping is increasingly being accepted Internet users, which reflects the online shopping convenient, fast, efficient and economic advantage. Online shopping, personal information security is a major problem in the Internet. 
This article summarizes the characteristics of online shopping and the current development of the main safety problems, and make online shopping related security measures and transactions.", "title": "" }, { "docid": "dd1f7671025d79dead0a87fef6cec409", "text": "PURPOSE This article summarizes prior work in the learning sciences and discusses one perspective—situative learning—in depth. Situativity refers to the central role of context, including the physical and social aspects of the environment, on learning. Furthermore, it emphasizes the socially and culturally negotiated nature of thought and action of persons in interaction. The aim of the article is to provide a foundation for future work on engineering learning and to suggest ways in which the learning sciences and engineering education research communities might work to their mutual benefit.", "title": "" }, { "docid": "e45fff410b042234cc6fda764a982532", "text": "The fisheye camera has been widely studied in the field of robot vision since it can capture a wide view of the scene at one time. However, serious image distortion handers it from being widely used. To remedy this, this paper proposes an improved fisheye lens calibration and distortion correction method. First, an improved automatic detection of checkerboards is presented to avoid the original constraint and user intervention that usually existed in the conventional methods. A state-of-the-art corner detection method is evaluated and its strengths and shortcomings are analyzed. An adaptively automatic corner detection algorithm is implemented to overcome the shortcomings. Then, a precise mathematical model based on the law of fisheye lens imaging is modeled, which assumes that the imaging function can be described by a Taylor series expansion, followed by a nonlinear refinement based on the maximum likelihood criterion. With the proposed corner detection and mathematical model of fisheye imaging, both intrinsic and external parameters of the fisheye camera can be correctly calibrated. Finally, the radial distortion of the fisheye image can be corrected by incorporating the calibrated parameters. Experimental results validate the effectiveness of the proposed method.", "title": "" }, { "docid": "baa70e5df451e8bc7354fcf00349f53b", "text": "This paper investigates object categorization according to function, i.e., learning the affordances of objects from human demonstration. Object affordances (functionality) are inferred from observations of humans using the objects in different types of actions. The intended application is learning from demonstration, in which a robot learns to employ objects in household tasks, from observing a human performing the same tasks with the objects. We present a method for categorizing manipulated objects and human manipulation actions in context of each other. The method is able to simultaneously segment and classify human hand actions, and detect and classify the objects involved in the action. This can serve as an initial step in a learning from demonstration method. Experiments show that the contextual information improves the classification of both objects and actions. 2010 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "2f20e5792104b67143b7dcc43954317e", "text": "Resource Description Framework (RDF) was designed with the initial goal of developing metadata for the Internet. 
While the Internet is a conglomeration of many interconnected networks and computers, most of today's best RDF storage solutions are confined to a single node. Working on a single node has significant scalability issues, especially considering the magnitude of modern day data. In this paper we introduce a scalable RDF data management system that uses Accumulo, a Google Bigtable variant. We introduce storage methods, indexing schemes, and query processing techniques that scale to billions of triples across multiple nodes, while providing fast and easy access to the data through conventional query mechanisms such as SPARQL. Our performance evaluation shows that in most cases, our system outperforms existing distributed RDF solutions, even systems much more complex than ours.", "title": "" }, { "docid": "c82c28a44adb4a67e44e1d680b1d13ad", "text": "Cipherbase is a comprehensive database system that provides strong end-to-end data confidentiality through encryption. Cipherbase is based on a novel architecture that combines an industrial strength database engine (SQL Server) with lightweight processing over encrypted data that is performed in secure hardware. The overall architecture provides significant benefits over the state-of-the-art in terms of security, performance, and functionality. This paper presents a prototype of Cipherbase that uses FPGAs to provide secure processing and describes the system engineering details implemented to achieve competitive performance for transactional workloads. This includes hardware-software co-design issues (e.g. how to best offer parallelism), optimizations to hide the latency between the secure hardware and the main system, and techniques to cope with space inefficiencies. All these optimizations were carefully designed not to affect end-to-end data confidentiality. Our experiments with the TPC-C benchmark show that in the worst case when all data are strongly encrypted, Cipherbase achieves 40% of the throughput of plaintext SQL Server. In more realistic cases, if only critical data such as customer names are encrypted, the Cipherbase throughput is more than 90% of plaintext SQL Server.", "title": "" }, { "docid": "9e0267f10a27509ae735b1ade704e461", "text": "Recent advances in software testing allow automatic derivation of tests that reach almost any desired point in the source code. There is, however, a fundamental problem with the general idea of targeting one distinct test coverage goal at a time: Coverage goals are neither independent of each other, nor is test generation for any particular coverage goal guaranteed to succeed. We present EvoSuite, a search-based approach that optimizes whole test suites towards satisfying a coverage criterion, rather than generating distinct test cases directed towards distinct coverage goals. Evaluated on five open source libraries and an industrial case study, we show that EvoSuite achieves up to 18 times the coverage of a traditional approach targeting single branches, with up to 44% smaller test suites.", "title": "" }, { "docid": "4bc73a7e6a6975ba77349cac62a96c18", "text": "BACKGROUND\nIn May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear.\n\n\nMETHODS\nThe study sample was identified through systematic searches in iTunes and Google Play. 
Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Mean and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play.\n\n\nRESULTS\nOn average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores were found.\n\n\nCONCLUSIONS\nThe present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions.", "title": "" }, { "docid": "8451a1080a9beaa07ab4d5072eb2f647", "text": "Track cycling events range from a 200 m flying sprint (lasting 10 to 11 seconds) to the 50 km points race (lasting approximately 1 hour). Unlike road cycling competitions where most racing is undertaken at submaximal power outputs, the shorter track events require the cyclist to tax maximally both the aerobic and anaerobic (oxygen independent) metabolic pathways. Elite track cyclists possess key physical and physiological attributes which are matched to the specific requirements of their events: these cyclists must have the appropriate genetic predisposition which is then maximised through effective training interventions. With advances in technology it is now possible to accurately measure both power supply and demand variables under competitive conditions. This information provides better resolution of factors that are important for training programme design and skill development.", "title": "" }, { "docid": "e1dd5419e848b8448780afee102b65e1", "text": "Wireless local area networks (WLANs) have become a promising choice for indoor positioning as the only existing and established infrastructure, to localize the mobile and stationary users indoors. However, since WLANs have been initially designed for wireless networking and not positioning, the localization task based on WLAN signals has several challenges. Amongst the WLAN positioning methods, WLAN fingerprinting localization has recently garnered great attention due to its promising performance. Notwithstanding, WLAN fingerprinting faces several challenges and hence, in this paper, our goal is to overview these challenges and corresponding state-of-the-art solutions. This paper consists of three main parts: 1) conventional localization schemes; 2) state-of-the-art approaches; and 3) practical deployment challenges. Since all proposed methods in the WLAN literature have been conducted and tested in different settings, the reported results are not readily comparable. 
So, we compare some of the representative localization schemes in a single real environment and assess their localization accuracy, positioning error statistics, and complexity. Our results depict illustrative evaluation of the approaches in the literature and guide to future improvement opportunities.", "title": "" }, { "docid": "0ff3ccdf834b8264cada634049389c9c", "text": "Many applications today need to manage large data sets with uncertainties. In this paper we describe the foundations of managing data where the uncertainties are quantified as probabilities. We review the basic definitions of the probabilistic data model, present some fundamental theoretical result for query evaluation on probabilistic databases, and discuss several challenges, open problems, and research directions.", "title": "" } ]
scidocsrr
7379816680472df3d7c1a11f1a457df2
Artistic minimal rendering with lines and blocks
[ { "docid": "cfe31ce3a6a23d9148709de6032bd90b", "text": "I argue that Non-Photorealistic Rendering (NPR) research will play a key role in the scientific understanding of visual art and illustration. NPR can contribute to scientific understanding of two kinds of problems: how do artists create imagery, and how do observers respond to artistic imagery? I sketch out some of the open problems, how NPR can help, and what some possible theories might look like. Additionally, I discuss the thorny problem of how to evaluate NPR research and theories.", "title": "" } ]
[ { "docid": "d23649c81665bc76134c09b7d84382d0", "text": "This paper demonstrates the advantages of using controlled mobility in wireless sensor networks (WSNs) for increasing their lifetime, i.e., the period of time the network is able to provide its intended functionalities. More specifically, for WSNs that comprise a large number of statically placed sensor nodes transmitting data to a collection point (the sink), we show that by controlling the sink movements we can obtain remarkable lifetime improvements. In order to determine sink movements, we first define a Mixed Integer Linear Programming (MILP) analytical model whose solution determines those sink routes that maximize network lifetime. Our contribution expands further by defining the first heuristics for controlled sink movements that are fully distributed and localized. Our Greedy Maximum Residual Energy (GMRE) heuristic moves the sink from its current location to a new site as if drawn toward the area where nodes have the highest residual energy. We also introduce a simple distributed mobility scheme (Random Movement or S. Basagni ( ) Department of Electrical and Computer Engineering, Northeastern University e-mail: basagni@ece.neu.edu A. Carosi · C. Petrioli Dipartimento di Informatica, Università di Roma “La Sapienza” e-mail: carosi@di.uniroma1.it C. Petrioli e-mail: petrioli@di.uniroma1.it E. Melachrinoudis · Z. M. Wang Department of Mechanical and Industrial Engineering, Northeastern University e-mail: emelas@coe.neu.edu Z. M. Wang e-mail: zmwang@coe.neu.edu RM) according to which the sink moves uncontrolled and randomly throughout the network. The different mobility schemes are compared through extensive ns2-based simulations in networks with different nodes deployment, data routing protocols, and constraints on the sink movements. In all considered scenarios, we observe that moving the sink always increases network lifetime. In particular, our experiments show that controlling the mobility of the sink leads to remarkable improvements, which are as high as sixfold compared to having the sink statically (and optimally) placed, and as high as twofold compared to uncontrolled mobility.", "title": "" }, { "docid": "f474fd0bce5fa65e79ceb77a17ace260", "text": "One popular approach to controlling humanoid robots is through inverse kinematics (IK) with stiff joint position tracking. On the other hand, inverse dynamics (ID) based approaches have gained increasing acceptance by providing compliant motions and robustness to external perturbations. However, the performance of such methods is heavily dependent on high quality dynamic models, which are often very difficult to produce for a physical robot. IK approaches only require kinematic models, which are much easier to generate in practice. In this paper, we supplement our previous work with ID-based controllers by adding IK, which helps compensate for modeling errors. The proposed full body controller is applied to three tasks in the DARPA Robotics Challenge (DRC) Trials in Dec. 2013.", "title": "" }, { "docid": "9093cff51237b4c601f604ad6df85aec", "text": "Motivation\nReconstructing the full-length expressed transcripts ( a.k.a. the transcript assembly problem) from the short sequencing reads produced by RNA-seq protocol plays a central role in identifying novel genes and transcripts as well as in studying gene expressions and gene functions. 
A crucial step in transcript assembly is to accurately determine the splicing junctions and boundaries of the expressed transcripts from the reads alignment. In contrast to the splicing junctions that can be efficiently detected from spliced reads, the problem of identifying boundaries remains open and challenging, due to the fact that the signal related to boundaries is noisy and weak.\n\n\nResults\nWe present DeepBound, an effective approach to identify boundaries of expressed transcripts from RNA-seq reads alignment. In its core DeepBound employs deep convolutional neural fields to learn the hidden distributions and patterns of boundaries. To accurately model the transition probabilities and to solve the label-imbalance problem, we novelly incorporate the AUC (area under the curve) score into the optimizing objective function. To address the issue that deep probabilistic graphical models requires large number of labeled training samples, we propose to use simulated RNA-seq datasets to train our model. Through extensive experimental studies on both simulation datasets of two species and biological datasets, we show that DeepBound consistently and significantly outperforms the two existing methods.\n\n\nAvailability and implementation\nDeepBound is freely available at https://github.com/realbigws/DeepBound .\n\n\nContact\nmingfu.shao@cs.cmu.edu or realbigws@gmail.com.", "title": "" }, { "docid": "771611dc99e22b054b936fce49aea7fc", "text": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various highdimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domaindependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.", "title": "" }, { "docid": "5cd3809ab7ed083de14bb622f12373fe", "text": "The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and efficient generation of wrappers, the development of tools for wrapper maintenance has received less attention. This is an important research problem because Web sources often change in ways that prevent the wrappers from extracting data correctly. 
We present an efficient algorithm that learns structural information about data from positive examples alone. We describe how this information can be used for two wrapper maintenance applications: wrapper verification and reinduction. The wrapper verification system detects when a wrapper is not extracting correct data, usually because the Web source has changed its format. The reinduction algorithm automatically recovers from changes in the Web source by identifying data on Web pages so that a new wrapper may be generated for this source. To validate our approach, we monitored 27 wrappers over a period of a year. The verification algorithm correctly discovered 35 of the 37 wrapper changes, and made 16 mistakes, resulting in precision of 0.73 and recall of 0.95. We validated the reinduction algorithm on ten Web sources. We were able to successfully reinduce the wrappers, obtaining precision and recall values of 0.90 and 0.80 on the data extraction task.", "title": "" }, { "docid": "0060fbebb60c7f67d8750826262d7135", "text": "This paper introduces a web image search reranking approach that explores multiple modalities in a graph-based learning scheme. Different from the conventional methods that usually adopt a single modality or integrate multiple modalities into a long feature vector, our approach can effectively integrate the learning of relevance scores, weights of modalities, and the distance metric and its scaling for each modality into a unified scheme. In this way, the effects of different modalities can be adaptively modulated and better reranking performance can be achieved. We conduct experiments on a large dataset that contains more than 1000 queries and 1 million images to evaluate our approach. Experimental results demonstrate that the proposed reranking approach is more robust than using each individual modality, and it also performs better than many existing methods.", "title": "" }, { "docid": "6a2b3389ad8de2a0e9a50d4324869c2a", "text": "Many web applications provide a fully automatic machine translation service, and users can easily access and understand the information they are interested in. However, the services still have inaccurate results when translating technical terms. Therefore, we suggest a new method that collects reliable translations of technical terms between Korean and English. To collect the pairs, we utilize the metadata of Korean scientific papers and make a new statistical model to adapt the metadata characteristics appropriately. The collected Korean-English pairs are evaluated in terms of reliability and compared with the results of Google translator. Through evaluation and comparison, we confirm that this research can produce highly reliable data and improve the translation quality of technical terms.", "title": "" }, { "docid": "06cc255e124702878e2106bf0e8eb47c", "text": "Agent technology has been recognized as a promising paradigm for next generation manufacturing systems. Researchers have attempted to apply agent technology to manufacturing enterprise integration, enterprise collaboration (including supply chain management and virtual enterprises), manufacturing process planning and scheduling, shop floor control, and to holonic manufacturing as an implementation methodology. 
This paper provides an update review on the recent achievements in these areas, and discusses some key issues in implementing agent-based manufacturing systems such as agent encapsulation, agent organization, agent coordination and negotiation, system dynamics, learning, optimization, security and privacy, tools and standards. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6b7f2b7e528ee530822ff5bbb371645d", "text": "Automatically generating video captions with natural language remains a challenge for both the field of nature language processing and computer vision. Recurrent Neural Networks (RNNs), which models sequence dynamics, has proved to be effective in visual interpretation. Based on a recent sequence to sequence model for video captioning, which is designed to learn the temporal structure of the sequence of frames and the sequence model of the generated sentences with RNNs, we investigate how pretrained language model and attentional mechanism can aid the generation of natural language descriptions of videos. We evaluate our improvements on the Microsoft Video Description Corpus (MSVD) dataset, which is a standard dataset for this task. The results demonstrate that our approach outperforms original sequence to sequence model and achieves state-of-art baselines. We further run our model one a much harder Montreal Video Annotation Dataset (M-VAD), where the model also shows promising results.", "title": "" }, { "docid": "7b7c418cefcd571b03e5c0a002a5e923", "text": "A loop antenna having a gap has been investigated in the presence of a ground plane. The antenna configuration is optimized for the CP radiation, using the method of moments. It is found that, as the loop height above the ground plane is reduced, the optimized gap width approaches zero. Further antenna height reduction is found to be possible for an antenna whose wire radius is increased. On the basis of these results, we design an open-loop array antenna using a microstrip comb line as the feed network. It is demonstrated that an array antenna composed of eight open loop elements can radiate a CP wave with an axial ratio of 0.1 dB. The bandwidth for a 3-dB axial-ratio criterion is 4%, where the gain is almost constant at 15 dBi.", "title": "" }, { "docid": "9627fdd88378559f0e2704bd6fef36e7", "text": "Traditionally, a full-mouth rehabilitation based on full-crown coverage has been the recommended treatment for patients affected by severe dental erosion. Nowadays, thanks to improved adhesive techniques, the indications for crowns have decreased and a more conservative approach may be proposed. Even though adhesive treatments simplify both the clinical and laboratory procedures, restoring such patients still remains a challenge due to the great amount of tooth destruction. To facilitate the clinician's task during the planning and execution of a full-mouth adhesive rehabilitation, an innovative concept has been developed: the three-step technique. Three laboratory steps are alternated with three clinical steps, allowing the clinician and the laboratory technician to constantly interact to achieve the most predictable esthetic and functional outcome. During the first step, an esthetic evaluation is performed to establish the position of the plane of occlusion. In the second step, the patient's posterior quadrants are restored at an increased vertical dimension. Finally, the third step reestablishes the anterior guidance. 
Using the three-step technique, the clinician can transform a full-mouth rehabilitation into a rehabilitation for individual quadrants. This article illustrates only the first step in detail, explaining all the clinical parameters that should be analyzed before initiating treatment.", "title": "" }, { "docid": "4d2a87405ed84e8108cd20c855918102", "text": "When testing software artifacts that have several dependencies, one has the possibility of either instantiating these dependencies or using mock objects to simulate the dependencies’ expected behavior. Even though recent quantitative studies showed that mock objects are widely used both in open source and proprietary projects, scientific knowledge is still lacking on how and why practitioners use mocks. An empirical understanding of the situations where developers have (and have not) been applying mocks, as well as the impact of such decisions in terms of coupling and software evolution can be used to help practitioners adapt and improve their future usage. To this aim, we study the usage of mock objects in three OSS projects and one industrial system. More specifically, we manually analyze more than 2,000 mock usages. We then discuss our findings with developers from these systems, and identify practices, rationales, and challenges. These results are supported by a structured survey with more than 100 professionals. Finally, we manually analyze how the usage of mock objects in test code evolve over time as well as the impact of their usage on the coupling between test and production code. Our study reveals that the usage of mocks is highly dependent on the responsibility and the architectural concern of the class. Developers report to frequently mock dependencies that make testing difficult (e.g., infrastructure-related dependencies) and to not mock classes that encapsulate domain concepts/rules of the system. Among the key challenges, developers report that maintaining the behavior of the mock compatible with the behavior of original class is hard and that mocking increases the coupling between the test and the production code. Their perceptions are confirmed by our data, as we observed that mocks mostly exist since the very first version of the test class, and that they tend to stay there for its whole lifetime, and that changes in production code often force the test code to also change.", "title": "" }, { "docid": "e2060b183968f81342df4f636a141a3b", "text": "This paper presents automatic parallel parking for a passenger vehicle, with highlights on a path-planning method and on experimental results. The path-planning method consists of two parts. First, the kinematic model of the vehicle, with corresponding geometry, is used to create a path to park the vehicle in one or more maneuvers if the spot is very narrow. This path is constituted of circle arcs. Second, this path is transformed into a continuous-curvature path using clothoid curves. To execute the generated path, control inputs for steering angle and longitudinal velocity depending on the traveled distance are generated. Therefore, the traveled distance and the vehicle pose during a parking maneuver are estimated. Finally, the parking performance is tested on a prototype vehicle.", "title": "" }, { "docid": "3f2aa3cde019d56240efba61d52592a4", "text": "Drivers like global competition, advances in technology, and new attractive market opportunities foster a process of servitization and thus the search for innovative service business models. 
To facilitate this process, different methods and tools for the development of new business models have emerged. Nevertheless, business model approaches are missing that enable the representation of cocreation as one of the most important service-characteristics. Rooted in a cumulative research design that seeks to advance extant business model representations, this goal is to be closed by the Service Business Model Canvas (SBMC). This contribution comprises the application of thinking-aloud protocols for the formative evaluation of the SBMC. With help of industry experts and academics with experience in the service sector and business models, the usability is tested and implications for its further development derived. Furthermore, this study provides empirically based insights for the design of service business model representation that can facilitate the development of future business models.", "title": "" }, { "docid": "736bf637db43f67775c8e7b934f12602", "text": "With the fast growing interest in deep learning, various applications and machine learning tasks are emerged in recent years. Video captioning is especially gaining a lot of attention from both computer vision and natural language processing fields. Generating captions is usually performed by jointly learning of different types of data modalities that share common themes in the video. Learning with the joining representations of different modalities is very challenging due to the inherent heterogeneity resided in the mixed information of visual scenes, speech dialogs, music and sounds, and etc. Consequently, it is hard to evaluate the quality of video captioning results. In this paper, we introduce well-known metrics and datasets for evaluation of video captioning. We compare the the existing metrics and datasets to derive a new research proposal for the evaluation of video descriptions.", "title": "" }, { "docid": "1e5956b0d9d053cd20aad8b53730c969", "text": "The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as \"the fog\". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.", "title": "" }, { "docid": "acb41ecca590ed8bc53b7af46a280daf", "text": "We consider the problem of state estimation for a dynamic system driven by unobserved, correlated inputs. We model these inputs via an uncertain set of temporally correlated dynamic models, where this uncertainty includes the number of modes, their associated statistics, and the rate of mode transitions. The dynamic system is formulated via two interacting graphs: a hidden Markov model (HMM) and a linear-Gaussian state space model. The HMM's state space indexes system modes, while its outputs are the unobserved inputs to the linear dynamical system. This Markovian structure accounts for temporal persistence of input regimes, but avoids rigid assumptions about their detailed dynamics. 
Via a hierarchical Dirichlet process (HDP) prior, the complexity of our infinite state space robustly adapts to new observations. We present a learning algorithm and computational results that demonstrate the utility of the HDP for tracking, and show that it efficiently learns typical dynamics from noisy data.", "title": "" }, { "docid": "6b04721c0fc7135ddd0fdf76a9cfdd79", "text": "Functional magnetic resonance imaging (fMRI) was used to compare brain activity during the retrieval of coarse- and fine-grained spatial details and episodic details associated with a familiar environment. Long-time Toronto residents compared pairs of landmarks based on their absolute geographic locations (requiring either coarse or fine discriminations) or based on previous visits to those landmarks (requiring episodic details). An ROI analysis of the hippocampus showed that all three conditions activated the hippocampus bilaterally. Fine-grained spatial judgments recruited an additional region of the right posterior hippocampus, while episodic judgments recruited an additional region of the right anterior hippocampus, and a more extensive region along the length of the left hippocampus. To examine whole-brain patterns of activity, Partial Least Squares (PLS) analysis was used to identify sets of brain regions whose activity covaried with the three conditions. All three comparison judgments recruited the default mode network including the posterior cingulate/retrosplenial cortex, middle frontal gyrus, hippocampus, and precuneus. Fine-grained spatial judgments also recruited additional regions of the precuneus, parahippocampal cortex and the supramarginal gyrus. Episodic judgments recruited the posterior cingulate and medial frontal lobes as well as the angular gyrus. These results are discussed in terms of their implications for theories of hippocampal function and spatial and episodic memory.", "title": "" }, { "docid": "5fe036906302ab4131c7f9afc662df3f", "text": "Plant peptide hormones play an important role in regulating plant developmental programs via cell-to-cell communication in a non-cell autonomous manner. To characterize the biological relevance of C-TERMINALLY ENCODED PEPTIDE (CEP) genes in rice, we performed a genome-wide search against public databases using a bioinformatics approach and identified six additional CEP members. Expression analysis revealed a spatial-temporal pattern of OsCEP6.1 gene in different tissues and at different developmental stages of panicle. Interestingly, the expression level of the OsCEP6.1 was also significantly up-regulated by exogenous cytokinin. Application of a chemically synthesized 15-amino acid OsCEP6.1 peptide showed that OsCEP6.1 had a negative role in regulating root and seedling growth, which was further confirmed by transgenic lines. Furthermore, the constitutive expression of OsCEP6.1 was sufficient to lead to panicle architecture and grain size variations. Scanning electron microscopy analysis revealed that the phenotypic variation of OsCEP6.1 overexpression lines resulted from decreased cell size but not reduced cell number. Moreover, starch accumulation was not significantly affected. Taken together, these data suggest that the OsCEP6.1 peptide might be involved in regulating the development of panicles and grains in rice.", "title": "" }, { "docid": "010926d088cf32ba3fafd8b4c4c0dedf", "text": "The number and the size of spatial databases, e.g. 
for geomarketing, traffic control or environmental studies, are rapidly growing which results in an increasing need for spatial data mining. In this paper, we present new algorithms for spatial characterization and spatial trend analysis. For spatial characterization it is important that class membership of a database object is not only determined by its non-spatial attributes but also by the attributes of objects in its neighborhood. In spatial trend analysis, patterns of change of some non-spatial attributes in the neighborhood of a database object are determined. We present several algorithms for these tasks. These algorithms were implemented within a general framework for spatial data mining providing a small set of database primitives on top of a commercial spatial database management system. A performance evaluation using a real geographic database demonstrates the effectiveness of the proposed algorithms. Furthermore, we show how the algorithms can be combined to discover even more interesting spatial knowledge.", "title": "" } ]
scidocsrr
e644d6bd9fe2152c7bfef76f6728b2c6
Examining playfulness in adults: Testing its correlates with personality, positive psychological functioning, goal aspirations, and multi-methodically assessed ingenuity
[ { "docid": "059aed9f2250d422d76f3e24fd62bed8", "text": "Single case studies led to the discovery and phenomenological description of Gelotophobia and its definition as the pathological fear of appearing to social partners as a ridiculous object (Titze 1995, 1996, 1997). The aim of the present study is to empirically examine the core assumptions about the fear of being laughed at in a sample comprising a total of 863 clinical and non-clinical participants. Discriminant function analysis yielded that gelotophobes can be separated from other shame-based neurotics, non-shamebased neurotics, and controls. Separation was best for statements specifically describing the gelotophobic symptomatology and less potent for more general questions describing socially avoidant behaviors. Factor analysis demonstrates that while Gelotophobia is composed of a set of correlated elements in homogenous samples, overall the concept is best conceptualized as unidimensional. Predicted and actual group membership converged well in a cross-classification (approximately 69% of correctly classified cases). Overall, it can be concluded that the fear of being laughed at varies tremendously among adults and might hold a key to understanding certain forms", "title": "" }, { "docid": "8c4d4567cf772a76e99aa56032f7e99e", "text": "This paper discusses current perspectives on play and leisure and proposes that if play and leisure are to be accepted as viable occupations, then (a) valid and reliable measures of play must be developed, (b) interventions must be examined for inclusion of the elements of play, and (c) the promotion of play and leisure must be an explicit goal of occupational therapy intervention. Existing tools used by occupational therapists to assess clients' play and leisure are evaluated for the aspects of play and leisure they address and the aspects they fail to address. An argument is presented for the need for an assessment of playfulness, rather than of play or leisure activities. A preliminary model for the development of such an assessment is proposed.", "title": "" } ]
[ { "docid": "51f63ccb338706e59b81cb3dfd36cfc6", "text": "As the first decentralized cryptocurrency, Bitcoin [1] has ignited much excitement, not only for its novel realization of a central bank-free financial instrument, but also as an alternative approach to classical distributed computing problems, such as reaching agreement distributedly in the presence of misbehaving parties, as well as to numerous other applications-contracts, reputation systems, name services, etc. The soundness and security of these applications, however, hinges on the thorough understanding of the fundamental properties of its underlying blockchain data structure, which parties (\"miners\") maintain and try to extend by generating \"proofs of work\" (POW, aka \"cryptographic puzzle\"). In this talk we follow the approach introduced in [2], formulating such fundamental properties of the blockchain, and then showing how applications such as consensus and a robust public transaction ledger can be built ``on top'' of them. The properties are as follows, assuming the adversary's hashing power (our analysis holds against arbitrary attacks) is strictly less than ½ and high network synchrony:\n Common prefix: The blockchains maintained by the honest parties possess a large common prefix. More specifically, if two honest parties \"prune\" (i.e., cut off) k blocks from the end of their local chains, the probability that the resulting pruned chains will not be mutual prefixes of each other drops exponentially in the that parameter.\n Chain quality: We show a bound on the ratio of blocks in the chain of any honest party contributed by malicious parties. In particular, as the adversary's hashing power approaches ½, we show that blockchains are only guaranteed to have few, but still some, blocks contributed by honest parties.\n Chain growth: We quantify the number of blocks that are added to the blockchain during any given number of rounds during the execution of the protocol. (N.B.: This property, which in [2] was proven and used directly in the form of a lemma, was explicitly introduced in [3]. Identifying it as a separate property enables modular proofs of applications' properties.)\n The above properties hold assuming that all parties-honest and adversarial-\"wake up\" and start computing at the same time, or, alternatively, that they compute on a common random string (the \"genesis\" block) only made available at the exact time when the protocol execution is to begin. In this talk we also consider the question of whether such a trusted setup/behavioral assumption is necessary, answering it in the negative by presenting a Bitcoin-like blockchain protocol that is provably secure without trusted setup, and, further, overcomes such lack in a scalable way-i.e., with running time independent of the number of parties [4].\n A direct consequence of our construction above is that consensus can be solved directly by a blockchain protocol without trusted setup assuming an honest majority (in terms of computational power).", "title": "" }, { "docid": "ad58798807256cff2eff9d3befaf290a", "text": "Centrality indices are an essential concept in network analysis. For those based on shortest-path distances the computation is at least quadratic in the number of nodes, since it usually involves solving the single-source shortest-paths (SSSP) problem from every node. Therefore, exact computation is infeasible for many large networks of interest today. Centrality scores can be estimated, however, from a limited number of SSSP computations. 
We present results from an experimental study of the quality of such estimates under various selection strategies for the source vertices.", "title": "" }, { "docid": "872a79a47e6a4d83e7440ea5e7126dee", "text": "We propose simple and extremely efficient methods for solving the Basis Pursuit problem min{‖u‖1 : Au = f, u ∈ R^n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈R^n} μ‖u‖1 + (1/2)‖Au − f^k‖^2, for given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps, and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A^T can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is solely based on such operations for solving the above unconstrained sub-problem, we were able to solve huge instances of compressed sensing problems quickly on a standard PC.", "title": "" }, { "docid": "5c056ba2e29e8e33c725c2c9dd12afa8", "text": "The large amount of text data which are continuously produced over time in a variety of large scale applications such as social networks results in massive streams of data. Typically massive text streams are created by very large scale interactions of individuals, or by structured creations of particular kinds of content by dedicated organizations. An example in the latter category would be the massive text streams created by news-wire services. Such text streams provide unprecedented challenges to data mining algorithms from an efficiency perspective. In this paper, we review text stream mining algorithms for a wide variety of problems in data mining such as clustering, classification and topic modeling. A recent challenge arises in the context of social streams, which are generated by large social networks such as Twitter. We also discuss a number of future challenges in this area of research.", "title": "" }, { "docid": "3023637fd498bb183dae72135812c304", "text": "computational method for its solution. A Psychological Description of LSA as a Theory of Learning, Memory, and Knowledge We give a more complete description of LSA as a mathematical model later when we use it to simulate lexical acquisition. However, an overall outline is necessary to understand a roughly equivalent psychological theory we wish to present first. The input to LSA is a matrix consisting of rows representing unitary event types by columns representing contexts in which instances of the event types appear. One example is a matrix of unique word types by many individual paragraphs in which the words are encountered, where a cell contains the number of times that a particular word type, say model, appears in a particular paragraph, say this one. After an initial transformation of the cell entries, this matrix is analyzed by a statistical technique called singular value decomposition (SVD) closely akin to factor analysis, which allows event types and individual contexts to be re-represented as points or vectors in a high dimensional abstract space (Golub, Lnk, & Overton, 1981 ).
The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or con-space (Golub, Lnk, & Overton, 1981 ). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or contexts (e.g., word-word, word-paragraph, or paragraph-paragraph similarities). Psychologically, the data that the model starts with are raw, first-order co-occurrence relations between stimuli and the local contexts or episodes in which they occur. The stimuli or event types may be thought of as unitary chunks of perception or memory. The first-order process by which initial pairwise associations are entered and transformed in LSA resembles classical conditioning in that it depends on contiguity or co-occurrence, but weights the result first nonlinearly with local occurrence frequency, then inversely with a function of the number of different contexts in which the particular component is encountered overall and the extent to which its occurrences are spread evenly over contexts. However, there are possibly important differences in the details as currently implemented; in particular, LSA associations are symmetrical; a context is associated with the individual events it contains by the same cell entry as the events are associated with the context. This would not be a necessary feature of the model; it would be possible to make the initial matrix asymmetrical, with a cell indicating the co-occurrence relation, for example, between a word and closely following words. Indeed, Lund and Burgess (in press; Lund, Burgess, & Atchley, 1995), and SchUtze (1992a, 1992b), have explored related models in which such data are the input. The first step of the LSA analysis is to transform each cell entry from the number of times that a word appeared in a particular context to the log of that frequency. This approximates the standard empirical growth functions of simple learning. The fact that this compressive function begins anew with each context also yields a kind of spacing effect; the association of A and B is greater if both appear in two different contexts than if they each appear twice in one context. In a second transformation, all cell entries for a given word are divided by the entropy for that word, Z p log p over all its contexts. Roughly speaking, this step accomplishes much the same thing as conditioning rules such as those described by Rescorla & Wagner (1972), in that it makes the primary association better represent the informative relation between the entities rather than the mere fact that they occurred together. Somewhat more formally, the inverse entropy measure estimates the degree to which observing the occurrence of a component specifies what context it is in; the larger the entropy of, say, a word, the less information its observation transmits about the places it has occurred, so the less usage-defined meaning it acquires, and conversely, the less the meaning of a particular context is determined by containing the word. It is interesting to note that automatic information retrieval methods (including LSA when used for the purpose) are greatly improved by transformations of this general form, the present one usually appearing to be the best (Harman, 1986). 
It does not seem far-fetched to believe that the necessary transform for good information retrieval, retrieval that brings back text corresponding to what a person has in mind when the person offers one or more query words, corresponds to the functional relations in basic associative processes. Anderson (1990) has drawn attention to the analogy between information retrieval in external systems and those in the human mind. It is not clear which way the relationship goes. Does information retrieval in automatic systems work best when it mimics the circumstances that make people think two things are related, or is there a general logic that tends to make them have similar forms? In automatic information retrieval the logic is usually assumed to be that idealized searchers have in mind exactly the same text as they would like the system to find and draw the words in 2 Although this exploratory process takes some advantage of chance, there is no reason why any number of dimensions should be much better than any other unless some mechanism like the one proposed is at work. In all cases, the model's remaining parameters were fitted only to its input (training) data and not to the criterion (generalization) test. THE LATENT SEMANTIC ANALYSIS THEORY OF KNOWLEDGE 217 their queries from that text (see Bookstein & Swanson, 1974). Then the system's challenge is to estimate the probability that each text in its store is the one that the searcher was thinking about. This characterization, then, comes full circle to the kind of communicative agreement model we outlined above: The sender issues a word chosen to express a meaning he or she has in mind, and the receiver tries to estimate the probability of each of the sender's possible messages. Gallistel (1990), has argued persuasively for the need to separate local conditioning or associative processes from global representation of knowledge. The LSA model expresses such a separation in a very clear and precise way. The initial matrix after transformation to log frequency divided by entropy represents the product of the local or pairwise processes? The subsequent analysis and dimensionality reduction takes all of the previously acquired local information and turns it into a unified representation of knowledge. Thus, the first processing step of the model, modulo its associational symmetry, is a rough approximation to conditioning or associative processes. However, the model's next steps, the singular value decomposition and dimensionality optimization, are not contained as such in any extant psychological theory of learning, although something of the kind may be hinted at in some modem discussions of conditioning and, on a smaller scale and differently interpreted, is often implicit and sometimes explicit in many neural net and spreading-activation architectures. This step converts the transformed associative data into a condensed representation. The condensed representation can be seen as achieving several things, although they are at heart the result of only one mechanism. First, the re-representation captures indirect, higher-order associations. That is, jf a particular stimulus, X, (e.g., a word) has been associated with some other stimulus, Y, by being frequently found in joint context (i.e., contiguity), and Y is associated with Z, then the condensation can cause X and Z to have similar representations. However, the strength of the indirect XZ association depends on much more than a combination of the strengths of XY and YZ. 
This is because the relation between X and Z also depends, in a wellspecified manner, on the relation of each of the stimuli, X, Y, and Z, to every other entity in the space. In the past, attempts to predict indirect associations by stepwise chaining rules have not been notably successful (see, e.g., Pollio, 1968; Young, 1968). If associations correspond to distances in space, as supposed by LSA, stepwise chaining rules would not be expected to work well; if X is two units from Y and Y is two units from Z, all we know about the distance from X to Z is that it must be between zero and four. But with data about the distances between X, Y, Z, and other points, the estimate of XZ may be greatly improved by also knowing XY and YZ. An alternative view of LSA's effects is the one given earlier, the induction of a latent higher order similarity structure (thus its name) among representations of a large collection of events. Imagine, for example, that every time a stimulus (e.g., a word) is encountered, the distance between its representation and that of every other stimulus that occurs in close proximity to it is adjusted to be slightly smaller. The adjustment is then allowed to percolate through the whole previously constructed structure of relations, each point pulling on its neighbors until all settle into a compromise configuration (physical objects, weather systems, and Hopfield nets do this too; Hopfield, 1982). It is easy to see that the resulting relation between any two representations depends not only on direct experience with them but with everything else ever experienced. Although the current mathematical implementation of LSA does not work in this incremental way, its effects are much the same. The question, then, is whether such a mechanism, when combined with the statistics of experience, produces a faithful reflection of human knowledge. Finally, to anticipate what is developed later, the computational scheme used by LSA for combining and condensing local information into a common", "title": "" }, { "docid": "02d8c55750904b7f4794139bcfa51693", "text": "BACKGROUND\nMore than one-third of deaths during the first five years of life are attributed to undernutrition, which are mostly preventable through economic development and public health measures. To alleviate this problem, it is necessary to determine the nature, magnitude and determinants of undernutrition. However, there is lack of evidence in agro-pastoralist communities like Bule Hora district. Therefore, this study assessed magnitude and factors associated with undernutrition in children who are 6-59 months of age in agro-pastoral community of Bule Hora District, South Ethiopia.\n\n\nMETHODS\nA community based cross-sectional study design was used to assess the magnitude and factors associated with undernutrition in children between 6-59 months. A structured questionnaire was used to collect data from 796 children paired with their mothers. Anthropometric measurements and determinant factors were collected. SPSS version 16.0 statistical software was used for analysis. Bivariate and multivariate logistic regression analyses were conducted to identify factors associated to nutritional status of the children Statistical association was declared significant if p-value was less than 0.05.\n\n\nRESULTS\nAmong study participants, 47.6%, 29.2% and 13.4% of them were stunted, underweight, and wasted respectively. 
Presence of diarrhea in the past two weeks, male sex, uneducated fathers and > 4 children ever born to a mother were significantly associated with being underweight. Presence of diarrhea in the past two weeks, male sex and pre-lacteal feeding were significantly associated with stunting. Similarly, presence of diarrhea in the past two weeks, age at complementary feed was started and not using family planning methods were associated to wasting.\n\n\nCONCLUSION\nUndernutrition is very common in under-five children of Bule Hora district. Factors associated to nutritional status of children in agro-pastoralist are similar to the agrarian community. Diarrheal morbidity was associated with all forms of Protein energy malnutrition. Family planning utilization decreases the risk of stunting and underweight. Feeding practices (pre-lacteal feeding and complementary feeding practice) were also related to undernutrition. Thus, nutritional intervention program in Bule Hora district in Ethiopia should focus on these factors.", "title": "" }, { "docid": "00b85bd052a196b1f02d00f6ad532ed2", "text": "The book Build Your Own Database Driven Website Using PHP & MySQL by Kevin Yank provides a hands-on look at what's involved in building a database-driven Web site. The author does a good job of patiently teaching the reader how to install and configure PHP 5 and MySQL to organize dynamic Web pages and put together a viable content management system. At just over 350 pages, the book is rather small compared to a lot of others on the topic, but it contains all the essentials. The author employs excellent teaching techniques to set up the foundation stone by stone and then grouts everything solidly together later in the book. This book aims at intermediate and advanced Web designers looking to make the leap to server-side programming. The author assumes his readers are comfortable with simple HTML. He provides an excellent introduction to PHP and MySQL (including installation) and explains how to make them work together. The amount of material he covers guarantees that almost any reader will benefit.", "title": "" }, { "docid": "09c4b35650141dfaf6e945dd6460dcf6", "text": "H2 histamine receptors are localized postsynaptically in the CNS. The aim of this study was to evaluate the effects of acute (1 day) and prolonged (7 day) administration of the H2 histamine receptor antagonist, famotidine, on the anticonvulsant activity of conventional antiepileptic drugs (AEDs; valproate, carbamazepine, diphenylhydantoin and phenobarbital) against maximal electroshock (MES)-induced seizures in mice. In addition, the effects of these drugs alone or in combination with famotidine were studied on motor performance and long-term memory. The influence of H2 receptor antagonist on brain concentrations and free plasma levels of the antiepileptic drugs was also evaluated. After acute or prolonged administration of famotidine (at dose of 10mg/kg) the drug raised the threshold for electroconvulsions. No effect was observed on this parameter at lower doses. Famotidine (5mg/kg), given acutely, significantly enhanced the anticonvulsant activity of valproate, which was expressed by a decrease in ED50. After the 7-day treatment, famotidine (5mg/kg) increased the anticonvulsant activity of diphenylhydantoin against MES. Famotidine (5mg/kg), after acute and prolonged administration, combined with valproate, phenobarbital, diphenylhydantoin and carbamazepine did not alter their free plasma levels. 
In contrast, brain concentrations of valproate were elevated for 1-day treatment with famotidine (5mg/kg). Moreover, famotidine co-applied with AEDs, given prolonged, worsened motor coordination in mice treated with carbamazepine or diphenylhydantoin. In contrast this histamine antagonist, did not impair the performance of mice evaluated in the long-term memory task. The results of this study indicate that famotidine modifies the anticonvulsant activity of some antiepileptic drugs.", "title": "" }, { "docid": "984f7a2023a14efbbd5027abfc12a586", "text": "Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation. In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall.", "title": "" }, { "docid": "88a8f162017f80c17be58faad16a6539", "text": "Instruction List (IL) is a simple typed assembly language commonly used in embedded control. There is little tool support for IL and, although defined in the IEC 61131-3 standard, there is no formal semantics. In this work we develop a formal operational semantics. Moreover, we present an abstract semantics, which allows approximative program simulation for a (possibly infinte) set of inputs in one simulation run. We also extended this framework to an abstract interpretation based analysis, which is implemented in our tool Homer. All these analyses can be carried out without knowledge of formal methods, which is typically not present in the IL community.", "title": "" }, { "docid": "40bc405aaec0fd8563de84e163091325", "text": "The extremely tight binding between biotin and avidin or streptavidin makes labeling proteins with biotin a useful tool for many applications. BirA is the Escherichia coli biotin ligase that site-specifically biotinylates a lysine side chain within a 15-amino acid acceptor peptide (also known as Avi-tag). As a complementary approach to in vivo biotinylation of Avi-tag-bearing proteins, we developed a protocol for producing recombinant BirA ligase for in vitro biotinylation. The target protein was expressed as both thioredoxin and MBP fusions, and was released from the corresponding fusion by TEV protease. The liberated ligase was separated from its carrier using HisTrap HP column. We obtained 24.7 and 27.6 mg BirA ligase per liter of culture from thioredoxin and MBP fusion constructs, respectively. The recombinant enzyme was shown to be highly active in catalyzing in vitro biotinylation. 
The described protocol provides an effective means for making BirA ligase that can be used for biotinylation of different Avi-tag-bearing substrates.", "title": "" }, { "docid": "86177ff4fbc089fde87d1acd8452d322", "text": "Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life.", "title": "" }, { "docid": "d9830ad99cc9339d62f3c3f5ec1d460a", "text": "The notion of value and of value creation has raised interest over the last 30 years for both researchers and practitioners. Although several studies have been conducted in marketing, value remains and elusive and often ill-defined concept. A clear understanding of value and value determinants can increase the awareness in strategic decisions and pricing choices. Objective of this paper is to preliminary discuss the main kinds of entity that an ontology of economic value should deal with.", "title": "" }, { "docid": "bc1ff96ebc41bc3040bb254f1620b190", "text": "The paper presents a new generation of torque-controlled li ghtweight robots (LWR) developed at the Institute of Robotics and Mechatronics of the German Aerospace Center . I order to act in unstructured environments and interact with humans, the robots have design features an d co trol/software functionalities which distinguish them from classical robots, such as: load-to-weight ratio o f 1:1, torque sensing in the joints, active vibration damping, sensitive collision detection, as well as complia nt control on joint and Cartesian level. Due to the partially unknown properties of the environment, robustne s of planning and control with respect to environmental variations is crucial. After briefly describing the main har dware features, the paper focuses on showing how joint torque sensing (as a main feature of the robot) is conse quently used for achieving the above mentioned performance, safety, and robustness properties.", "title": "" }, { "docid": "d0b287d0bd41dedbbfa3357653389e9c", "text": "Credit scoring model have been developed by banks and researchers to improve the process of assessing credit worthiness during the credit evaluation process. The objective of credit scoring models is to assign credit risk to either a ‘‘good risk’’ group that is likely to repay financial obligation or a ‘‘bad risk’’ group who has high possibility of defaulting on the financial obligation. Construction of credit scoring models requires data mining techniques. Using historical data on payments, demographic characteristics and statistical techniques, credit scoring models can help identify the important demographic characteristics related to credit risk and provide a score for each customer. This paper illustrates using data mining to improve assessment of credit worthiness using credit scoring models. 
Due to privacy concerns and unavailability of real financial data from banks, this study applies the credit scoring techniques using data on the payment history of members from a recreational club. The club has been facing a problem of a rising number of defaulters in their monthly club subscription payments. The management would like to have a model which they can deploy to identify potential defaulters. The classification performances of the credit scorecard model, logistic regression model and decision tree model were compared. The classification error rates for the credit scorecard model, logistic regression and decision tree were 27.9%, 28.8% and 28.1%, respectively. Although no model outperforms the other, scorecards are relatively much easier to deploy in practical applications.", "title": "" }, { "docid": "8d21369604ad890704d535785c8e3171", "text": "With the integration of advanced computing and communication technologies, smart grid is considered as the next-generation power system, which promises self healing, resilience, sustainability, and efficiency to the energy critical infrastructure. The smart grid innovation brings enormous challenges and initiatives across both industry and academia, in which the security issue emerges to be a critical concern. In this paper, we present a survey of recent security advances in smart grid, by a data driven approach. Compared with existing related works, our survey is centered around the security vulnerabilities and solutions within the entire lifecycle of smart grid data, which are systematically decomposed into four sequential stages: 1) data generation; 2) data acquisition; 3) data storage; and 4) data processing. Moreover, we further review the security analytics in smart grid, which employs data analytics to ensure smart grid security. Finally, an effort to shed light on potential future research concludes this paper.", "title": "" }, { "docid": "7e8f116433e530032d31938703af1cd3", "text": "Background. This systematic review and meta-analysis evaluated the effectiveness of professional topical fluoride application (gels or varnishes) on the reversal treatment of incipient enamel carious lesions in primary or permanent", "title": "" }, { "docid": "8324dc0dfcfb845739a22fb9321d5482", "text": "In this paper, we study deep generative models for effective unsupervised learning. We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density p(x) is approximated by a variational distribution q(x) that is easy to sample from. The training of VGAN takes a two step procedure: given p(x), q(x) is updated to maximize the lower bound; p(x) is then updated one step with samples drawn from q(x) to decrease the lower bound. VGAN is inspired by the generative adversarial networks (GANs), where p(x) corresponds to the discriminator and q(x) corresponds to the generator, but with several notable differences. We hence name our model variational GANs (VGANs). VGAN provides a practical solution to training deep EBMs in high dimensional space, by eliminating the need of MCMC sampling. From this view, we are also able to identify causes to the difficulty of training GANs and propose viable solutions.
1", "title": "" }, { "docid": "3a5be5b365cfdc6f29646bf97953fc18", "text": "Fuzzy set methods have been used to model and manage uncertainty in various aspects of image processing, pattern recognition, and computer vision. High-level computer vision applications hold a great potential for fuzzy set theory because of its links to natural language. Linguistic scene description, a language-based interpretation of regions and their relationships, is one such application that is starting to bear the fruits of fuzzy set theoretic involvement. In this paper, we are expanding on two earlier endeavors. We introduce new families of fuzzy directional relations that rely on the computation of histograms of forces. These families preserve important relative position properties. They provide inputs to a fuzzy rule base that produces logical linguistic descriptions along with assessments as to the validity of the descriptions. Each linguistic output uses hedges from a dictionary of about 30 adverbs and other terms that can be tailored to individual users. Excellent results from several synthetic and real image examples show the applicability of this approach.", "title": "" }, { "docid": "37d3954ce00a1f9fd90c6adfde388ab1", "text": "Computational personality traits assessment is one of an interesting areas in affective computing. It becomes popular because personality identification can be used in many areas and get benefits. Such areas are business, politics, education, social media, medicine, and user interface design. The famous statement \"Face is a mirror of the mind\" proves that person's appearance depends on the inner aspects of a person. Conversely, Person's behavior and appearance describe the person's personality, so an analyze on appearance and behavior gives knowledge on personality traits. There are varieties of methods have been discovered by researchers to assess personality computationally with various machine learning algorithms. In this paper reviews methods and theories involved in psychological traits assessment and evolution of computational psychological traits assessment with different machine learning algorithms and different feature sets.", "title": "" } ]
scidocsrr
d18596439ea78601495e51fc3ee92d28
Load Balancing in Cloud Computing
[ { "docid": "1231b1e1e0ace856815e32dbdc38a113", "text": "Availability of cloud systems is one of the main concerns of cloud computing. The term, availability of clouds, is mainly evaluated by ubiquity of information comparing with resource scaling. In clouds, load balancing, as a method, is applied across different data centers to ensure the network availability by minimizing use of computer hardware, software failures and mitigating recourse limitations. This work discusses the load balancing in cloud computing and then demonstrates a case study of system availability based on a typical Hospital Database Management solution.", "title": "" } ]
[ { "docid": "1669a9cb7dabaa778fbb367bbba77232", "text": "Functional significance of delta oscillations is not fully understood. One way to approach this question would be from an evolutionary perspective. Delta oscillations dominate the EEG of waking reptiles. In humans, they are prominent only in early developmental stages and during slow-wave sleep. Increase of delta power has been documented in a wide array of developmental disorders and pathological conditions. Considerable evidence on the association between delta waves and autonomic and metabolic processes hints that they may be involved in integration of cerebral activity with homeostatic processes. Much evidence suggests the involvement of delta oscillations in motivation. They increase during hunger, sexual arousal, and in substance users. They also increase during panic attacks and sustained pain. In cognitive domain, they are implicated in attention, salience detection, and subliminal perception. This evidence shows that delta oscillations are associated with evolutionary old basic processes, which in waking adults are overshadowed by more advanced processes associated with higher frequency oscillations. The former processes rise in activity, however, when the latter are dysfunctional.", "title": "" }, { "docid": "733d0b179ec81a283166830be087546c", "text": "This paper presents an interactive face retrieval framework for clarifying an image representation envisioned by a user. Our system is designed for a situation in which the user wishes to find a person but has only visual memory of the person. We address a critical challenge of image retrieval across the user's inputs. Instead of target-specific information, the user can select several images (or a single image) that are similar to an impression of the target person the user wishes to search for. Based on the user's selection, our proposed system automatically updates a deep convolutional neural network. By interactively repeating these process (human-in-the-loop optimization), the system can reduce the gap between human-based similarities and computer-based similarities and estimate the target image representation. We ran user studies with 10 subjects on a public database and confirmed that the proposed framework is effective for clarifying the image representation envisioned by the user easily and quickly.", "title": "" }, { "docid": "5fbdeba4f91d31a9a3555109872ff250", "text": "Wepresent new results for the Frank–Wolfemethod (also known as the conditional gradient method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on the warm-start quality of the initial (and subsequent) iterates. Our results include computational guarantees for both duality/bound gaps and the so-calledFWgaps. Lastly,wepresent complexity bounds in the presence of approximate computation of gradients and/or linear optimization subproblem solutions. Mathematics Subject Classification 90C06 · 90C25 · 65K05", "title": "" }, { "docid": "7afe5c6affbaf30b4af03f87a018a5b3", "text": "Sentiment analysis deals with identifying polarity orientation embedded in users' comments and reviews. It aims at discriminating positive reviews from negative ones. Sentiment is related to culture and language morphology. 
In this paper, we investigate the effects of language morphology on sentiment analysis in reviews written in the Arabic language. In particular, we investigate, in details, how negation affects sentiments. We also define a set of rules that capture the morphology of negations in Arabic. These rules are then used to detect sentiment taking care of negated words. Experimentations prove that our suggested approach is superior to several existing methods that deal with sentiment detection in Arabic reviews.", "title": "" }, { "docid": "d495f9ae71492df9225249147563a3d9", "text": "The control of a PWM rectifier with LCL-filter using a minimum number of sensors is analyzed. In addition to the DC-link voltage either the converter or line current is measured. Two different ways of current control are shown, analyzed and compared by simulations as well as experimental investigations. Main focus is spent on active damping of the LCL filter resonance and on robustness against line inductance variations.", "title": "" }, { "docid": "4105ebe68ca25c863f77dde3ff94dcdc", "text": "This paper deals with the increasingly important issue of proper handling of information security for electric power utilities. It is based on the efforts of CIGRE Joint Working Group (JWG) D2/B3/C2-01 on \"Security for Information Systems and Intranets in Electric Power System\" carried out between 2003 and 2006. The JWG has produced a technical brochure (TB), where the purpose to raise the awareness of information and cybersecurity in electric power systems, and gives some guidance on how to solve the security problem by focusing on security domain modeling, risk assessment methodology, and security framework building. Here in this paper, the focus is on the issue of awareness and to highlight some steps to achieve a framework for cybersecurity management. Also, technical considerations of some communication systems for substation automation are studied. Finally, some directions for further works in this vast area of information and cybersecurity are given.", "title": "" }, { "docid": "f49364d463c3225e52e22c8c043e9590", "text": "Palpation is a physical examination technique where objects, e.g., organs or body parts, are touched with fingers to determine their size, shape, consistency and location. Many medical procedures utilize palpation as a supplementary interaction technique and it can be therefore considered as an essential basic method. However, palpation is mostly neglected in medical training simulators, with the exception of very specialized simulators that solely focus on palpation, e.g., for manual cancer detection. In this article we propose a novel approach to enable haptic palpation interaction for virtual reality-based medical simulators. The main contribution is an extensive user study conducted with a large group of medical experts. To provide a plausible simulation framework for this user study, we contribute a novel and detailed interaction algorithm for palpation with tissue dragging, which utilizes a multi-object force algorithm to support multiple layers of anatomy and a pulse force algorithm for simulation of an arterial pulse. Furthermore, we propose a modification for an off-the-shelf haptic device by adding a lightweight palpation pad to support a more realistic finger grip configuration for palpation tasks. The user study itself has been conducted on a medical training simulator prototype with a specific procedure from regional anesthesia, which strongly depends on palpation. 
The prototype utilizes a co-rotational finite-element approach for soft tissue simulation and provides bimanual interaction by combining the aforementioned techniques with needle insertion for the other hand. The results of the user study suggest reasonable face validity of the simulator prototype and in particular validate medical plausibility of the proposed palpation interaction algorithm.", "title": "" }, { "docid": "ff59e2a5aa984dec7805a4d9d55e69e5", "text": "We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.", "title": "" }, { "docid": "cac86f06684dd8442eb2281d1fea213f", "text": "Many existing Machine Learning (ML) based Android malware detection approaches use a variety of features such as security-sensitive APIs, system calls, control-flow structures and information flows in conjunction with ML classifiers to achieve accurate detection. Each of these feature sets provides a unique semantic perspective (or view) of apps’ behaviors with inherent strengths and limitations. Meaning, some views are more amenable to detect certain attacks but may not be suitable to characterize several other attacks. Most of the existing malware detection approaches use only one (or a selected few) of the aforementioned feature sets which prevents them from detecting a vast majority of attacks. Addressing this limitation, we propose MKLDroid, a unified framework that systematically integrates multiple views of apps for performing comprehensive malware detection and malicious code localization. The rationale is that, while a malware app can disguise itself in some views, disguising in every view while maintaining malicious intent will be much harder. MKLDroid uses a graph kernel to capture structural and contextual information from apps’ dependency graphs and identify malice code patterns in each view. Subsequently, it employs Multiple Kernel Learning (MKL) to find a weighted combination of the views which yields the best detection accuracy. Besides multi-view learning, MKLDroid’s unique and salient trait is its ability to locate fine-grained malice code portions in dependency graphs (e.g., methods/classes). Malicious code localization caters several important applications such as supporting human analysts studying malware behaviors, engineering malware signatures, and other counter-measures. Through our large-scale experiments on several datasets (incl. wild apps), we demonstrate that MKLDroid outperforms three state-of-the-art techniques consistently, in terms of accuracy while maintaining comparable efficiency. In our malicious code localization experiments on a dataset of repackaged malware, MKLDroid was able to identify all the malice classes with 94% average recall. 
Our work opens up two new avenues in malware research: (i) enables the research community to elegantly look at Android malware behaviors in multiple perspectives simultaneously, and (ii) performing precise and scalable malicious code localization.", "title": "" }, { "docid": "e26a155425b3691629649cd32aa8648e", "text": "Technologies in autonomous vehicles have seen dramatic advances in recent years; however, it still lacks of robust perception systems for car detection. With the recent development in deep learning research, in this paper, we propose a LIDAR and vision fusion system for car detection through the deep learning framework. It consists of three major parts. The first part generates seed proposals for potential car locations in the image by taking LIDAR point cloud into account. The second part refines the location of the proposal boxes by exploring multi-layer information in the proposal network and the last part carries out the final detection task through a detection network which shares part of the layers with the proposal network. The evaluation shows that the proposed framework is able to generate high quality proposal boxes more efficiently (77.6% average recall) and detect the car at the state of the art accuracy (89.4% average precision). With further optimization of the framework structure, it has great potentials to be implemented onto the autonomous vehicle.", "title": "" }, { "docid": "67c444b9538ccfe7a2decdd11523dcd5", "text": "Attention-based learning for fine-grained image recognition remains a challenging task, where most of the existing methods treat each object part in isolation, while neglecting the correlations among them. In addition, the multi-stage or multi-scale mechanisms involved make the existing methods less efficient and hard to be trained end-to-end. In this paper, we propose a novel attention-based convolutional neural network (CNN) which regulates multiple object parts among different input images. Our method first learns multiple attention region features of each input image through the one-squeeze multi-excitation (OSME) module, and then apply the multi-attention multi-class constraint (MAMC) in a metric learning framework. For each anchor feature, the MAMC functions by pulling same-attention same-class features closer, while pushing different-attention or different-class features away. Our method can be easily trained end-to-end, and is highly efficient which requires only one training stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog species dataset that surpasses similar existing datasets by category coverage, data volume and annotation quality. Extensive experiments are conducted to show the substantial improvements of our method on four benchmark datasets.", "title": "" }, { "docid": "4f1a3aa69ec17e31164531346d6739e7", "text": "While high-level data parallel frameworks, like MapReduce, simplify the design and implementation of large-scale data processing systems, they do not naturally or efficiently support many important data mining and machine learning algorithms and can lead to inefficient learning systems. To help fill this critical void, we introduced the GraphLab abstraction which naturally expresses asynchronous, dynamic, graph-parallel computation while ensuring data consistency and achieving a high degree of parallel performance in the shared-memory setting. In this paper, we extend the GraphLab framework to the substantially more challenging distributed setting while preserving strong data consistency guarantees. 
We develop graph based extensions to pipelined locking and data versioning to reduce network congestion and mitigate the effect of network latency. We also introduce fault tolerance to the GraphLab abstraction using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can be easily implemented by exploiting the GraphLab abstraction itself. Finally, we evaluate our distributed implementation of the GraphLab abstraction on a large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains over Hadoop-based implementations.", "title": "" }, { "docid": "7c4531a6b23003b8888038fea66d09fa", "text": "Mapping and monitoring the Earth’s surface is subject to various application fields. This three-dimensional problem is usually split into a two-dimensional planar description of the Earth’s surface supplemented with the height information provided as separate digital elevation models. However, from the global point of view there is still a tremendous need for suitable height information at a resolution level of about 1 to 3 arc seconds. The Shuttle Radar Topography Mission (SRTM) will help to fill this gap by providing high quality digital elevation model (DEM). In order to provide terrain information for areas where either SRTM data will not be available or the corresponding resolution is not sufficient a combination with other sources will be required. The German Remote Sensing Data Center (DFD) of DLR intends to implement such a global DEM service. All SRTM/X-SAR data will be processed to elevation data and will serve as the backbone as it provides a global net of homogenous elevation information. This net can be used for the absolute orientation of other DEMs as geometric reference, but also for the improvement of the height quality by integrating elevation data from a variety of other sources by DEM fusion and mosaicking techniques. The paper describes the principles and corresponding accuracy of space borne missions for the derivation of DEMs. The main focus is on the DEM products of SRTM/X-SAR. Furthermore, the ERS-1/2 tandem configuration and the MOMS-2P mission are described. The technique to combine multi-source DEMs is outlined, which is based on the concept of height error maps. The method is illustrated by practical examples. Finally, an outlook is given on further investigations.", "title": "" }, { "docid": "9970a23aedeb1a613a0909c28c35222e", "text": "Imaging radars incorporating digital beamforming (DBF) typically require a uniform linear antenna array (ULA). However, using a large number of parallel receivers increases system complexity and cost. A switched antenna array can provide a similar performance at a lower expense. This paper describes an active switched antenna array with 32 integrated planar patch antennas illuminating a cylindrical lens. The array can be operated over a frequency range from 73 GHz–81 GHz. Together with a broadband FMCW frontend (Frequency Modulated Continuous Wave) a DBF radar was implemented. The design of the array is presented together with measurement results.", "title": "" }, { "docid": "7fd5f3461742db10503dd5e3d79fe3ed", "text": "There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. 
In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.", "title": "" }, { "docid": "7a1f409eea5e0ff89b51fe0a26d6db8d", "text": "A multi-agent system consisting of <inline-formula><tex-math notation=\"LaTeX\">$N$</tex-math></inline-formula> agents is considered. The problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied. This problem, referred to as the <italic>multi-agent collision avoidance problem</italic>, is formulated as a differential game. Dynamic feedback strategies that approximate the feedback Nash equilibrium solutions of the differential game are constructed and it is shown that, provided certain assumptions are satisfied, these guarantee that the agents reach their targets while avoiding collisions.", "title": "" }, { "docid": "361dc8037ebc30cd2f37f4460cf43569", "text": "OVERVIEW: Next-generation semiconductor factories need to support miniaturization below 100 nm and have higher production efficiency, mainly of 300-mm-diameter wafers. Particularly to reduce the price of semiconductor devices, shorten development time [thereby reducing the TAT (turn-around time)], and support frequent product changeovers, semiconductor manufacturers must enhance the productivity of their systems. To meet these requirements, Hitachi proposes solutions that will support e-manufacturing on the next-generation semiconductor production line (see Fig. 1). Yasutsugu Usami Isao Kawata Hideyuki Yamamoto Hiroyoshi Mori Motoya Taniguchi, Dr. Eng.", "title": "" }, { "docid": "9b9425132e89d271ed6baa0dbc16b941", "text": "Although personalized recommendation has been investigated for decades, the wide adoption of Latent Factor Models (LFM) has made the explainability of recommendations a critical issue to both the research community and practical application of recommender systems. For example, in many practical systems the algorithm just provides a personalized item recommendation list to the users, without persuasive personalized explanation about why such an item is recommended while another is not. Unexplainable recommendations introduce negative effects to the trustworthiness of recommender systems, and thus affect the effectiveness of recommendation engines. In this work, we investigate explainable recommendation in aspects of data explainability, model explainability, and result explainability, and the main contributions are as follows: 1. Data Explainability: We propose Localized Matrix Factorization (LMF) framework based Bordered Block Diagonal Form (BBDF) matrices, and further applied this technique for parallelized matrix factorization. 2. Model Explainability: We propose Explicit Factor Models (EFM) based on phrase-level sentiment analysis, as well as dynamic user preference modeling based on time series analysis. In this work, we extract product features and user opinions towards different features from large-scale user textual reviews based on phrase-level sentiment analysis techniques, and introduce the EFM approach for explainable model learning and recommendation. 3. Economic Explainability: We propose the Total Surplus Maximization (TSM) framework for personalized recommendation, as well as the model specification in different types of online applications. 
Based on basic economic concepts, we provide the definitions of utility, cost, and surplus in the application scenario of Web services, and propose the general framework of web total surplus calculation and maximization.", "title": "" }, { "docid": "aa7d94bebbd988af48bc7cb9f5e35a39", "text": "Over the recent years, embedding methods have attracted increasing focus as a means for knowledge graph completion. Similarly, rule-based systems have been studied for this task in the past. What is missing so far is a common evaluation that includes more than one type of method. We close this gap by comparing representatives of both types of systems in a frequently used evaluation protocol. Leveraging the explanatory qualities of rule-based systems, we present a fine-grained evaluation that gives insight into characteristics of the most popular datasets and points out the different strengths and shortcomings of the examined approaches. Our results show that models such as TransE, RESCAL or HolE have problems in solving certain types of completion tasks that can be solved by a rulebased approach with high precision. At the same time, there are other completion tasks that are difficult for rule-based systems. Motivated by these insights, we combine both families of approaches via ensemble learning. The results support our assumption that the two methods complement each other in a beneficial way.", "title": "" }, { "docid": "badb04b676d3dab31024e8033fc8aec4", "text": "Review was undertaken from February 1969 to January 1998 at the State forensic science center (Forensic Science) in Adelaide, South Australia, of all cases of murder-suicide involving children <16 years of age. A total of 13 separate cases were identified involving 30 victims, all of whom were related to the perpetrators. There were 7 male and 6 female perpetrators (age range, 23-41 years; average, 31 years) consisting of 6 mothers, 6 father/husbands, and 1 uncle/son-in-law. The 30 victims consisted of 11 daughters, 11 sons, 1 niece, 1 mother-in-law, and 6 wives of the assailants. The 23 children were aged from 10 months to 15 years (average, 6.0 years). The 6 mothers murdered 9 children and no spouses, with 3 child survivors. The 6 fathers murdered 13 children and 6 wives, with 1 child survivor. This study has demonstrated a higher percentage of female perpetrators than other studies of murder-suicide. The methods of homicide and suicide used were generally less violent among the female perpetrators compared with male perpetrators. Fathers killed not only their children but also their wives, whereas mothers murdered only their children. These results suggest differences between murder-suicides that involve children and adult-only cases, and between cases in which the mother rather than the father is the perpetrator.", "title": "" } ]
scidocsrr
f00685a8bce9662a359a7720a2af752b
Performance of Medium-Voltage DC-Bus PV System Architecture Utilizing High-Gain DC–DC Converter
[ { "docid": "d06e4f97786f8ecf9694ed270a36c24a", "text": "In this paper, an improved maximum power point (MPP) tracking (MPPT) with better performance based on voltage-oriented control (VOC) is proposed to solve a fast-changing irradiation problem. In VOC, a cascaded control structure with an outer dc link voltage control loop and an inner current control loop is used. The currents are controlled in a synchronous orthogonal d,q frame using a decoupled feedback control. The reference current of proportional-integral (PI) d-axis controller is extracted from the dc-side voltage regulator by applying the energy-balancing control. Furthermore, in order to achieve a unity power factor, the q-axis reference is set to zero. The MPPT controller is applied to the reference of the outer loop control dc voltage photovoltaic (PV). Without PV array power measurement, the proposed MPPT identifies the correct direction of the MPP by processing the d-axis current reflecting the power grid side and the signal error of the PI outer loop designed to only represent the change in power due to the changing atmospheric conditions. The robust tracking capability under rapidly increasing and decreasing irradiance is verified experimentally with a PV array emulator. Simulations and experimental results demonstrate that the proposed method provides effective, fast, and perfect tracking.", "title": "" } ]
[ { "docid": "86ef637e1c2d4a8907bc25063e2dc246", "text": "Motorcycle accidents have been rapidly growing throughout the years in many countries. Due to various social and economic factors, this type of vehicle is becoming increasingly popular. The helmet is the main safety equipment of motorcyclists, but many drivers do not use it. If an motorcyclist is without helmet an accident can be fatal. This paper aims to explain and illustrate an automatic method for motorcycles detection and classification on public roads and a system for automatic detection of motorcyclists without helmet. For this, a hybrid descriptor for features extraction is proposed based in Local Binary Pattern, Histograms of Oriented Gradients and the Hough Transform descriptors. Traffic images captured by cameras were used. The best result obtained from classification was an accuracy rate of 0.9767, and the best result obtained from helmet detection was an accuracy rate of 0.9423.", "title": "" }, { "docid": "c5e7e7daf6c910db006d45150c97c4d1", "text": "This paper presents the implementation of real-time automatic speech recognition (ASR) for portable devices. The speech recognition is performed offline using PocketSphinx which is the implementation of Carnegie Mellon University's Sphinx speech recognition engine for portable devices. In this work, machine Learning approach is used which converts graphemes into phonemes using the TensorFlow's Sequence-to-Sequence model to produce the pronunciations of words. This paper also explains the implementation of statistical language model for ASR. The novelty of ASR is its offline speech recognition and thus requires no Internet connection compared to other related works. A speech recognition service currently provides the cloud based processing of speech and therefore has access to the speech data of users. However, the speech is processed on the handheld device in offline ASR and therefore enhances the privacy of users.", "title": "" }, { "docid": "0d0eb6ed5dff220bc46ffbf87f90ee59", "text": "Objectives. The aim of this review was to investigate whether alternating hot–cold water treatment is a legitimate training tool for enhancing athlete recovery. A number of mechanisms are discussed to justify its merits and future research directions are reported. Alternating hot–cold water treatment has been used in the clinical setting to assist in acute sporting injuries and rehabilitation purposes. However, there is overwhelming anecdotal evidence for it’s inclusion as a method for post exercise recovery. Many coaches, athletes and trainers are using alternating hot–cold water treatment as a means for post exercise recovery. Design. A literature search was performed using SportDiscus, Medline and Web of Science using the key words recovery, muscle fatigue, cryotherapy, thermotherapy, hydrotherapy, contrast water immersion and training. Results. The physiologic effects of hot–cold water contrast baths for injury treatment have been well documented, but its physiological rationale for enhancing recovery is less known. Most experimental evidence suggests that hot–cold water immersion helps to reduce injury in the acute stages of injury, through vasodilation and vasoconstriction thereby stimulating blood flow thus reducing swelling. This shunting action of the blood caused by vasodilation and vasoconstriction may be one of the mechanisms to removing metabolites, repairing the exercised muscle and slowing the metabolic process down. Conclusion. 
To date there are very few studies that have focussed on the effectiveness of hot–cold water immersion for post exercise treatment. More research is needed before conclusions can be drawn on whether alternating hot–cold water immersion improves recuperation and influences the physiological changes that characterises post exercise recovery. q 2003 Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7afe5c6affbaf30b4af03f87a018a5b3", "text": "Sentiment analysis deals with identifying polarity orientation embedded in users' comments and reviews. It aims at discriminating positive reviews from negative ones. Sentiment is related to culture and language morphology. In this paper, we investigate the effects of language morphology on sentiment analysis in reviews written in the Arabic language. In particular, we investigate, in details, how negation affects sentiments. We also define a set of rules that capture the morphology of negations in Arabic. These rules are then used to detect sentiment taking care of negated words. Experimentations prove that our suggested approach is superior to several existing methods that deal with sentiment detection in Arabic reviews.", "title": "" }, { "docid": "2079bd806c3b6b9de28b0a3d158f63f3", "text": "Beam search is a desirable choice of test-time decoding algorithm for neural sequence models because it potentially avoids search errors made by simpler greedy methods. However, typical cross entropy training procedures for these models do not directly consider the behaviour of the final decoding method. As a result, for cross-entropy trained models, beam decoding can sometimes yield reduced test performance when compared with greedy decoding. In order to train models that can more effectively make use of beam search, we propose a new training procedure that focuses on the final loss metric (e.g. Hamming loss) evaluated on the output of beam search. While well-defined, this “direct loss” objective is itself discontinuous and thus difficult to optimize. Hence, in our approach, we form a sub-differentiable surrogate objective by introducing a novel continuous approximation of the beam search decoding procedure. In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross entropy trained greedy decoding and cross entropy trained beam decoding baselines.", "title": "" }, { "docid": "d612aeb7f7572345bab8609571f4030d", "text": "In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function.1 When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.", "title": "" }, { "docid": "8327cb7a8d39ce8f8f982aa38cdd517e", "text": "Although many valuable visualizations have been developed to gain insights from large data sets, selecting an appropriate visualization for a specific data set and goal remains challenging for non-experts. In this paper, we propose a novel approach for knowledge-assisted, context-aware visualization recommendation. 
Both semantic web data and visualization components are annotated with formalized visualization knowledge from an ontology. We present a recommendation algorithm that leverages those annotations to provide visualization components that support the users’ data and task. We successfully proved the practicability of our approach by integrating it into two research prototypes. Keywords-recommendation, visualization, ontology, mashup", "title": "" }, { "docid": "13a4dccde0ae401fc39b50469a0646b6", "text": "The stability theorem for persistent homology is a central result in topological data analysis. While the original formulation of the result concerns the persistence barcodes of R-valued functions, the result was later cast in a more general algebraic form, in the language of persistence modules and interleavings. In this paper, we establish an analogue of this algebraic stability theorem for zigzag persistence modules. To do so, we functorially extend each zigzag persistence module to a two-dimensional persistence module, and establish an algebraic stability theorem for these extensions. One part of our argument yields a stability result for free two-dimensional persistence modules. As an application of our main theorem, we strengthen a result of Bauer et al. on the stability of the persistent homology of Reeb graphs. Our main result also yields an alternative proof of the stability theorem for level set persistent homology of Carlsson et al.", "title": "" }, { "docid": "397d6f645f5607140cf7d16597b8ec83", "text": "OBJECTIVES\nTo determine if differences between dyslexic and typical readers in their reading scores and verbal IQ are evident as early as first grade and whether the trajectory of these differences increases or decreases from childhood to adolescence.\n\n\nSTUDY DESIGN\nThe subjects were the 414 participants comprising the Connecticut Longitudinal Study, a sample survey cohort, assessed yearly from 1st to 12th grade on measures of reading and IQ. Statistical analysis employed longitudinal models based on growth curves and multiple groups.\n\n\nRESULTS\nAs early as first grade, compared with typical readers, dyslexic readers had lower reading scores and verbal IQ, and their trajectories over time never converge with those of typical readers. These data demonstrate that such differences are not so much a function of increasing disparities over time but instead because of differences already present in first grade between typical and dyslexic readers.\n\n\nCONCLUSIONS\nThe achievement gap between typical and dyslexic readers is evident as early as first grade, and this gap persists into adolescence. These findings provide strong evidence and impetus for early identification of and intervention for young children at risk for dyslexia. Implementing effective reading programs as early as kindergarten or even preschool offers the potential to close the achievement gap.", "title": "" }, { "docid": "e8b5f7f67b5095873419df4984c19333", "text": "A series of fluorescent pH probes based on the spiro-cyclic rhodamine core, aminomethylrhodamines (AMR), was synthesized and the effect of cycloalkane ring size on the acid/base properties of the AMR system was explored. The study involved a series of rhodamine 6G (cAMR6G) and rhodamine B (cAMR) pH probes with cycloalkane ring sizes from C-3 to C-6 on the spiro-cyclic amino group. It is known that the pKa value of cycloalkylamines can be tuned by different ring sizes in accordance with the Baeyer ring strain theory. 
Smaller ring amines have lower pKa value, i.e., they are less basic, such that the relative order in cycloalkylamine basicity is: cyclohexyl > cyclopentyl > cyclobutyl > cyclopropyl. Herein, it was found that the pKa values of the cAMR and cAMR6G systems can also be predicted by Baeyer ring strain theory. The pKa values for the cAMR6G series were shown to be higher than the cAMR series by a value of approximately 1.", "title": "" }, { "docid": "a2f8cb66e02e87861a322ce50fef97af", "text": "The conversion of biomass by gasification into a fuel suitable for use in a gas engine increases greatly the potential usefulness of biomass as a renewable resource. Gasification is a robust proven technology that can be operated either as a simple, low technology system based on a fixed-bed gasifier, or as a more sophisticated system using fluidized-bed technology. The properties of the biomass feedstock and its preparation are key design parameters when selecting the gasifier system. Electricity generation using a gas engine operating on gas produced by the gasification of biomass is applicable equally to both the developed world (as a means of reducing greenhouse gas emissions by replacing fossil fuel) and to the developing world (by providing electricity in rural areas derived from traditional biomass).", "title": "" }, { "docid": "4d2461f0fe7cd85ed2d4678f3a3b164b", "text": "BACKGROUND\nProblematic Internet addiction or excessive Internet use is characterized by excessive or poorly controlled preoccupations, urges, or behaviors regarding computer use and Internet access that lead to impairment or distress. Currently, there is no recognition of internet addiction within the spectrum of addictive disorders and, therefore, no corresponding diagnosis. It has, however, been proposed for inclusion in the next version of the Diagnostic and Statistical Manual of Mental Disorder (DSM).\n\n\nOBJECTIVE\nTo review the literature on Internet addiction over the topics of diagnosis, phenomenology, epidemiology, and treatment.\n\n\nMETHODS\nReview of published literature between 2000-2009 in Medline and PubMed using the term \"internet addiction.\n\n\nRESULTS\nSurveys in the United States and Europe have indicated prevalence rate between 1.5% and 8.2%, although the diagnostic criteria and assessment questionnaires used for diagnosis vary between countries. Cross-sectional studies on samples of patients report high comorbidity of Internet addiction with psychiatric disorders, especially affective disorders (including depression), anxiety disorders (generalized anxiety disorder, social anxiety disorder), and attention deficit hyperactivity disorder (ADHD). Several factors are predictive of problematic Internet use, including personality traits, parenting and familial factors, alcohol use, and social anxiety.\n\n\nCONCLUSIONS AND SCIENTIFIC SIGNIFICANCE\nAlthough Internet-addicted individuals have difficulty suppressing their excessive online behaviors in real life, little is known about the patho-physiological and cognitive mechanisms responsible for Internet addiction. 
Due to the lack of methodologically adequate research, it is currently impossible to recommend any evidence-based treatment of Internet addiction.", "title": "" }, { "docid": "265b352775956004436b438574ee2d91", "text": "In the fashion industry, demand forecasting is particularly complex: companies operate with a large variety of short lifecycle products, deeply influenced by seasonal sales, promotional events, weather conditions, advertising and marketing campaigns, on top of festivities and socio-economic factors. At the same time, shelf-out-of-stock phenomena must be avoided at all costs. Given the strong seasonal nature of the products that characterize the fashion sector, this paper aims to highlight how the Fourier method can represent an easy and more effective forecasting method compared to other widespread heuristics normally used. For this purpose, a comparison between the fast Fourier transform algorithm and another two techniques based on moving average and exponential smoothing was carried out on a set of 4year historical sales data of a €60+ million turnover mediumto large-sized Italian fashion company, which operates in the women’s textiles apparel and clothing sectors. The entire analysis was performed on a common spreadsheet, in order to demonstrate that accurate results exploiting advanced numerical computation techniques can be carried out without necessarily using expensive software.", "title": "" }, { "docid": "87aa88d141bd7b3fcf2ae83708003e93", "text": "The bin packing problem, in which a set of items of various sizes has to be packed into a minimum number of identical bins, has been extensively studied during the past fifteen years, mainly with the aim of finding fast heuristic algorithms to provide good approximate solutions. We present lower bounds and a dominance criterion and derive a reduction algorithm. Lower bounds are evaluated through an extension of the concept of worst-case performance. For both lower bounds and reduction algorithm an experimental analysis is provided.", "title": "" }, { "docid": "522f90c716a43f376780ec3e3ec1574c", "text": "Noncontact estimation of respiratory pattern (RP) and respiratory rate (RR) has multiple applications. Existing methods for RP and RR measurement fall into one of the three categories—1) estimation through nasal air flow measurement, 2) estimation from video-based remote photoplethysmography, and 2) estimation by measurement of motion induced by respiration using motion detectors. However, these methods require specialized sensors, are computationally expensive, and/or critically depend on selection of a region of interest (ROI) for processing. In this paper, a general framework is described for estimating a periodic signal driving noisy linear time-invariant (LTI) channels connected in parallel with unknown dynamics. The method is then applied to derive a computationally inexpensive method for estimating RP using two-dimensional cameras that does not critically depend on ROI. Specifically, RP is estimated by imaging changes in the reflected light caused by respiration-induced motion. Each spatial location in the field of view of the camera is modeled as a noise-corrupted LTI measurement channel with unknown system dynamics, driven by a single generating respiratory signal. Estimation of RP is cast as a blind deconvolution problem and is solved through a method comprising subspace projection and statistical aggregation. 
Experiments are carried out on 31 healthy human subjects by generating multiple RPs and comparing the proposed estimates with simultaneously acquired ground truth from an impedance pneumograph device. The proposed estimator agrees well with the ground truth in terms of correlation measures, despite variability in clothing pattern, camera angle, and ROI.", "title": "" }, { "docid": "771fd500518c9c0ae048f3bc883b5eca", "text": "Gestures are important for communicating information among the human. Nowadays new technologies of Human Computer Interaction (HCI) are being developed to deliver user's command to the robots. Users can interact with machines through hand, head, facial expressions, voice and touch. The objective of this paper is to use one of the important modes of interaction i.e. hand gestures to control the robot or for offices and household applications. Hand gesture detection algorithms are based on various machine learning methods such as neural networks, support vector machine, and Adaptive Boosting (AdaBoost). Among these methods, AdaBoost based hand-pose detectors are trained with a reduced Haar-like feature set to make the detector robust. The corresponding context-free grammar based proposed method gives effective real time performance with great accuracy and robustness for more than four hand gestures. Rectangles are creating some problem due to that we have also implement the alternate representation method for same gestures i.e. fingertip detection using convex hull algorithm.", "title": "" }, { "docid": "18233af1857390bff51d2e713bc766d9", "text": "Name disambiguation is a perennial challenge for any large and growing dataset but is particularly significant for scientific publication data where documents and ideas are linked through citations and depend on highly accurate authorship. Differentiating personal names in scientific publications is a substantial problem as many names are not sufficiently distinct due to the large number of researchers active in most academic disciplines today. As more and more documents and citations are published every year, any system built on this data must be continually retrained and reclassified to remain relevant and helpful. Recently, some incremental learning solutions have been proposed, but most of these have been limited to small-scale simulations and do not exhibit the full heterogeneity of the millions of authors and papers in real world data. In our work, we propose a probabilistic model that simultaneously uses a rich set of metadata and reduces the amount of pairwise comparisons needed for new articles. We suggest an approach to disambiguation that classifies in an incremental fashion to alleviate the need for retraining the model and re-clustering all papers and uses fewer parameters than other algorithms. Using a published dataset, we obtained the highest K-measure which is a geometric mean of cluster and author-class purity. Moreover, on a difficult author block from the Clarivate Analytics Web of Science, we obtain higher precision than other algorithms.", "title": "" }, { "docid": "08196718e17bfcdcecea60b0fb735638", "text": "Atari games are an excellent testbed for studying intelligent behavior, as they offer a range of tasks that differ widely in their visual representation, game dynamics, and goals presented to an agent. The last two years have seen a spate of research into artificial agents that use a single algorithm to learn to play these games. 
The best of these artificial agents perform at better-than-human levels on most games, but require hundreds of hours of game-play experience to produce such behavior. Humans, on the other hand, can learn to perform well on these tasks in a matter of minutes. In this paper we present data on human learning trajectories for several Atari games, and test several hypotheses about the mechanisms that lead to such rapid learning.", "title": "" }, { "docid": "5b92aa85d93c2fbb09df5a0b96fc9c1f", "text": "Social networking services have been prevalent at many online communities such as Twitter.com and Weibo.com, where millions of users keep interacting with each other every day. One interesting and important problem in the social networking services is to rank users based on their vitality in a timely fashion. An accurate ranking list of user vitality could benefit many parties in social network services such as the ads providers and site operators. Although it is very promising to obtain a vitality-based ranking list of users, there are many technical challenges due to the large scale and dynamics of social networking data. In this paper, we propose a unique perspective to achieve this goal, which is quantifying user vitality by analyzing the dynamic interactions among users on social networks. Examples of social network include but are not limited to social networks in microblog sites and academical collaboration networks. Intuitively, if a user has many interactions with his friends within a time period and most of his friends do not have many interactions with their friends simultaneously, it is very likely that this user has high vitality. Based on this idea, we develop quantitative measurements for user vitality and propose our first algorithm for ranking users based vitality. Also, we further consider the mutual influence between users while computing the vitality measurements and propose the second ranking algorithm, which computes user vitality in an iterative way. Other than user vitality ranking, we also introduce a vitality prediction problem, which is also of great importance for many applications in social networking services. Along this line, we develop a customized prediction model to solve the vitality prediction problem. To evaluate the performance of our algorithms, we collect two dynamic social network data sets. The experimental results with both data sets clearly demonstrate the advantage of our ranking and prediction methods.", "title": "" }, { "docid": "199541aa317b2ebb4d40906d974ce5f2", "text": "Experimental evidence has accumulated to suggest that biologically efficacious informational effects can be derived mimicking active compounds solely through electromagnetic distribution upon aqueous systems affecting biological systems. Empirically rigorous demonstrations of antimicrobial agent associated electromagnetic informational inhibition of MRSA, Entamoeba histolytica, Trichomonas vaginalis, Candida albicans and a host of other important and various reported effects have been evidenced, such as the electro-informational transfer of retinoic acid influencing human neuroblastoma cells and stem teratocarcinoma cells. Cell proliferation and differentiation effects from informationally affected fields interactive with aqueous systems are measured via microscopy, statistical analysis, reverse transcription polymerase chain reaction and other techniques. 
Information associated with chemical compounds affects biological aqueous systems, sans direct systemic exposure to the source molecule. This is a quantum effect, based on the interactivity between electromagnetic fields, and aqueous ordered coherence domains. The encoding of aqueous systems and tissue by photonic transfer and instantiation of information rather than via direct exposure to potentially toxic drugs and physical substances holds clear promise of creating inexpensive non-toxic medical treatments. Corresponding author.", "title": "" } ]
scidocsrr
0a8ffc3e525a9e15863c7e0d84c7a2d0
SPECTRAL BASIS NEURAL NETWORKS FOR REAL-TIME TRAVEL TIME FORECASTING
[ { "docid": "727a97b993098aa1386e5bfb11a99d4b", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.", "title": "" }, { "docid": "8b1b0ee79538a1f445636b0798a0c7ca", "text": "Much of the current activity in the area of intelligent vehicle-highway systems (IVHS) focuses on one simple objective: to collect more data. Clearly, improvements in sensor technology and communication systems will allow transportation agencies to more closely monitor the condition of the surface transportation system. However, monitoring alone cannot improve the safety or efficiency of the system. It is imperative that surveillance data be used to manage the system in a proactive rather than a reactive manner. 'Proactive traffic management will require the ability to predict traffic conditions. Previous predictive modeling approaches can be grouped into three categories: (a) historical, data-based algorithms; (b) time-series models; and (c) simulations. A relatively new mathematical model, the neural network, offers an attractive alternative because neural networks can model undefined, complex nonlinear surfaces. In a comparison of a backpropagation neural network model with the more traditional approaches of an historical, data-based algorithm and a time-series model, the backpropagation model· was clearly superior, although all three models did an adequate job of predicting future traffic volumes. The backpropagation model was more responsive to dynamic conditions than the historical, data-based algorithm, and it did not experience the lag and overprediction characteristics of the time-series model. Given these advantages and the backpropagation model's ability to run in a parallel computing environment, it appears that such neural network prediction models hold considerable potential for use in real-time IVHS applications.", "title": "" } ]
[ { "docid": "b01b7d382f534812f07faaaa1442b3f9", "text": "In this paper, we first establish new relationships in matrix forms among discrete Fourier transform (DFT), generalized DFT (GDFT), and various types of discrete cosine transform (DCT) and discrete sine transform (DST) matrices. Two new independent tridiagonal commuting matrices for each of DCT and DST matrices of types I, IV, V, and VIII are then derived from the existing commuting matrices of DFT and GDFT. With these new commuting matrices, the orthonormal sets of Hermite-like eigenvectors for DCT and DST matrices can be determined and the discrete fractional cosine transform (DFRCT) and the discrete fractional sine transform (DFRST) are defined. The relationships among the discrete fractional Fourier transform (DFRFT), fractional GDFT, and various types of DFRCT and DFRST are developed to reduce computations for DFRFT and fractional GDFT.", "title": "" }, { "docid": "d60fb42ca7082289c907c0e2e2c343fc", "text": "As mentioned in the paper, the direct optimization of group assignment variables with reduced gradients yields faster convergence than optimization via softmax reparametrization. Figure 1 shows the distribution plots, which are provided by TensorFlow, of class-to-group assignments using two methods. Despite starting with lower variance, when the distribution of group assignment variables diverged to", "title": "" }, { "docid": "7380419cc9c5eac99e8d46e73df78285", "text": "This paper discusses the classification of books purely based on cover image and title, without prior knowledge or context of author and origin. Several methods were implemented to assess the ability to distinguish books based on only these two characteristics. First we used a color-based distribution approach. Then we implemented transfer learning with convolutional neural networks on the cover image along with natural language processing on the title text. We found that image and text modalities yielded similar accuracy which indicate that we have reached a certain threshold in distinguishing between the genres that we have defined. This was confirmed by the accuracy being quite close to the human oracle accuracy.", "title": "" }, { "docid": "793d41551a918a113f52481ff3df087e", "text": "In this paper, we propose a novel deep captioning framework called Attention-based multimodal recurrent neural network with Visual Concept Transfer Mechanism (A-VCTM). There are three advantages of the proposed A-VCTM. (1) A multimodal layer is used to integrate the visual representation and context representation together, building a bridge that connects context information with visual information directly. (2) An attention mechanism is introduced to lead the model to focus on the regions corresponding to the next word to be generated (3) We propose a visual concept transfer mechanism to generate novel visual concepts and enrich the description sentences. Qualitative and quantitative results on two standard benchmarks, MSCOCO and Flickr30K show the effectiveness and practicability of the proposed A-VCTM framework.", "title": "" }, { "docid": "8c0d117602ecadee24215f5529e527c6", "text": "We present the first open-set language identification experiments using one-class classification models. We first highlight the shortcomings of traditional feature extraction methods and propose a hashing-based feature vectorization approach as a solution. 
Using a dataset of 10 languages from different writing systems, we train a One-Class Support Vector Machine using only a monolingual corpus for each language. Each model is evaluated against a test set of data from all 10 languages and we achieve an average F-score of 0.99, demonstrating the effectiveness of this approach for open-set language identification.", "title": "" }, { "docid": "478aa46b9dafbc111c1ff2cdb03a5a77", "text": "This paper presents results from recent work using structured light laser profile imaging to create high resolution bathymetric maps of underwater archaeological sites. Documenting the texture and structure of submerged sites is a difficult task and many applicable acoustic and photographic mapping techniques have recently emerged. This effort was completed to evaluate laser profile imaging in comparison to stereo imaging and high frequency multibeam mapping. A ROV mounted camera and inclined 532 nm sheet laser were used to create profiles of the bottom that were then merged into maps using platform navigation data. These initial results show very promising resolution in comparison to multibeam and stereo reconstructions, particularly in low contrast scenes. At the test sites shown here there were no significant complications related to scattering or attenuation of the laser sheet by the water. The resulting terrain was gridded at 0.25 cm and shows overall centimeter level definition. The largest source of error was related to the calibration of the laser and camera geometry. Results from three small areas show the highest resolution 3D models of a submerged archaeological site to date and demonstrate that laser imaging will be a viable method for accurate three dimensional site mapping and documentation.", "title": "" }, { "docid": "2876086e4431e8607d5146f14f0c29dc", "text": "Vascular ultrasonography has an important role in the diagnosis and management of venous disease. The venous system, however, is more complex and variable compared to the arterial system due to its frequent anatomical variations. This often becomes quite challenging for sonographers. This paper discusses the anatomy of the long saphenous vein and its anatomical variations accompanied by sonograms and illustrations.", "title": "" }, { "docid": "d362b36e0c971c43856a07b7af9055f3", "text": "s (New York: ACM), pp. 1617 – 20. MASLOW, A.H., 1954,Motivation and personality (New York: Harper). MCDONAGH, D., HEKKERT, P., VAN ERP, J. and GYI, D. (Eds), 2003, Design and Emotion: The Experience of Everyday Things (London: Taylor & Francis). MILLARD, N., HOLE, L. and CROWLE, S., 1999, Smiling through: motivation at the user interface. In Proceedings of the HCI International’99, Volume 2 (pp. 824 – 8) (Mahwah, NJ, London: Lawrence Erlbaum Associates). NORMAN, D., 2004a, Emotional design: Why we love (or hate) everyday things (New York: Basic Books). NORMAN, D., 2004b, Introduction to this special section on beauty, goodness, and usability. Human Computer Interaction, 19, pp. 311 – 18. OVERBEEKE, C.J., DJAJADININGRAT, J.P., HUMMELS, C.C.M. and WENSVEEN, S.A.G., 2002, Beauty in Usability: Forget about ease of use! In Pleasure with products: Beyond usability, W. Green and P. Jordan (Eds), pp. 9 – 18 (London: Taylor & Francis). 96 M. Hassenzahl and N. Tractinsky D ow nl oa de d by [ M as se y U ni ve rs ity L ib ra ry ] at 2 1: 34 2 3 Ju ly 2 01 1 PICARD, R., 1997, Affective computing (Cambridge, MA: MIT Press). PICARD, R. 
and KLEIN, J., 2002, Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 14, pp. 141 – 69. POSTREL, V., 2002, The substance of style (New York: Harper Collins). SELIGMAN, M.E.P. and CSIKSZENTMIHALYI, M., 2000, Positive Psychology: An Introduction. American Psychologist, 55, pp. 5 – 14. SHELDON, K.M., ELLIOT, A.J., KIM, Y. and KASSER, T., 2001, What is satisfying about satisfying events? Testing 10 candidate psychological needs. Journal of Personality and Social Psychology, 80, pp. 325 – 39. SINGH, S.N. and DALAL, N.P., 1999, Web home pages as advertisements. Communications of the ACM, 42, pp. 91 – 8. SUH, E., DIENER, E. and FUJITA, F., 1996, Events and subjective well-being: Only recent events matter. Journal of Personality and Social Psychology,", "title": "" }, { "docid": "47ac4b546fe75f2556a879d6188d4440", "text": "There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.", "title": "" }, { "docid": "587f1510411636090bc192b1b9219b58", "text": "Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition-the ability to come up with creative ideas, problem solutions and products-is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying on valance and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to 'happy music' (i.e., classical music high on arousal and positive mood) while performing the divergent creativity task, than for participants who performed the task in silence. No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. 
Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational and organizational settings when creative thinking is needed.", "title": "" }, { "docid": "cdf2235bea299131929700406792452c", "text": "Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.", "title": "" }, { "docid": "e33d34d0fbc19dbee009134368e40758", "text": "Quantum metrology exploits quantum phenomena to improve the measurement sensitivity. Theoretical analysis shows that quantum measurement can break through the standard quantum limits and reach super sensitivity level. Quantum radar systems based on quantum measurement can fufill not only conventional target detection and recognition tasks but also capable of detecting and identifying the RF stealth platform and weapons systems. The theoretical basis, classification, physical realization of quantum radar is discussed comprehensively in this paper. And the technology state and open questions of quantum radars is reviewed at the end.", "title": "" }, { "docid": "06b4bfebe295e3dceadef1a842b2e898", "text": "Constant changes in the economic environment, where globalization and the development of the knowledge economy act as drivers, are systematically pushing companies towards the challenge of accessing external markets. Web localization constitutes a new field of study and professional intervention. From the translation perspective, localization equates to the website being adjusted to the typological, discursive and genre conventions of the target culture, adapting that website to a different language and culture. This entails much more than simply translating the content of the pages. The content of a webpage is made up of text, images and other multimedia elements, all of which have to be translated and subjected to cultural adaptation. A case study has been carried out to analyze the current presence of localization within Spanish SMEs from the chemical sector. 
Two types of indicator have been established for evaluating the sample: indicators for evaluating company websites (with a Likert scale from 0–4) and indicators for evaluating web localization (0–2 scale). The results show overall website quality is acceptable (2.5 points out of 4). The higher rating has been obtained by the system quality (with 2.9), followed by information quality (2.7 points) and, lastly, service quality (1.9 points). In the web localization evaluation, the contact information aspects obtain 1.4 points, the visual aspect 1.04, and the navigation aspect was the worse considered (0.37). These types of analysis facilitate the establishment of practical recommendations aimed at SMEs in order to increase their international presence through the localization of their websites.", "title": "" }, { "docid": "3cae5c0440536b95cf1d0273071ad046", "text": "Android platform adopts permissions to protect sensitive resources from untrusted apps. However, after permissions are granted by users at install time, apps could use these permissions (sensitive resources) with no further restrictions. Thus, recent years have witnessed the explosion of undesirable behaviors in Android apps. An important part in the defense is the accurate analysis of Android apps. However, traditional syscall-based analysis techniques are not well-suited for Android, because they could not capture critical interactions between the application and the Android system.\n This paper presents VetDroid, a dynamic analysis platform for reconstructing sensitive behaviors in Android apps from a novel permission use perspective. VetDroid features a systematic framework to effectively construct permission use behaviors, i.e., how applications use permissions to access (sensitive) system resources, and how these acquired permission-sensitive resources are further utilized by the application. With permission use behaviors, security analysts can easily examine the internal sensitive behaviors of an app. Using real-world Android malware, we show that VetDroid can clearly reconstruct fine-grained malicious behaviors to ease malware analysis. We further apply VetDroid to 1,249 top free apps in Google Play. VetDroid can assist in finding more information leaks than TaintDroid, a state-of-the-art technique. In addition, we show how we can use VetDroid to analyze fine-grained causes of information leaks that TaintDroid cannot reveal. Finally, we show that VetDroid can help identify subtle vulnerabilities in some (top free) applications otherwise hard to detect.", "title": "" }, { "docid": "48a8cfc2ac8c8c63bbd15aba5a830ef9", "text": "We extend prior research on masquerade detection using UNIX commands issued by users as the audit source. Previous studies using multi-class training requires gathering data from multiple users to train specific profiles of self and non-self for each user. Oneclass training uses data representative of only one user. We apply one-class Naïve Bayes using both the multivariate Bernoulli model and the Multinomial model, and the one-class SVM algorithm. The result shows that oneclass training for this task works as well as multi-class training, with the great practical advantages of collecting much less data and more efficient training. One-class SVM using binary features performs best among the oneclass training algorithms.", "title": "" }, { "docid": "cf506587f2699d88e4a2e0be36ccac41", "text": "A complete list of the titles in this series appears at the end of this volume. 
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.", "title": "" }, { "docid": "89c85642fc2e0b1f10c9a13b19f1d833", "text": "Many current successful Person Re-Identification(ReID) methods train a model with the softmax loss function to classify images of different persons and obtain the feature vectors at the same time. However, the underlying feature embedding space is ignored. In this paper, we use a modified softmax function, termed Sphere Softmax, to solve the classification problem and learn a hypersphere manifold embedding simultaneously. A balanced sampling strategy is also introduced. Finally, we propose a convolutional neural network called SphereReID adopting Sphere Softmax and training a single model end-to-end with a new warming-up learning rate schedule on four challenging datasets including Market-1501, DukeMTMC-reID, CHHK-03, and CUHK-SYSU. Experimental results demonstrate that this single model outperforms the state-of-the-art methods on all four datasets without fine-tuning or reranking. For example, it achieves 94.4% rank-1 accuracy on Market-1501 and 83.9% rank-1 accuracy on DukeMTMC-reID. The code and trained weights of our model will be released.", "title": "" }, { "docid": "fee96195e50e7418b5d63f8e6bd07907", "text": "Optimal power flow (OPF) is considered for microgrids, with the objective of minimizing either the power distribution losses, or, the cost of power drawn from the substation and supplied by distributed generation (DG) units, while effecting voltage regulation. The microgrid is unbalanced, due to unequal loads in each phase and non-equilateral conductor spacings on the distribution lines. Similar to OPF formulations for balanced systems, the considered OPF problem is nonconvex. Nevertheless, a semidefinite programming (SDP) relaxation technique is advocated to obtain a convex problem solvable in polynomial-time complexity. Enticingly, numerical tests demonstrate the ability of the proposed method to attain the globally optimal solution of the original nonconvex OPF. To ensure scalability with respect to the number of nodes, robustness to isolated communication outages, and data privacy and integrity, the proposed SDP is solved in a distributed fashion by resorting to the alternating direction method of multipliers. 
The resulting algorithm entails iterative message-passing among groups of consumers and guarantees faster convergence compared to competing alternatives.", "title": "" }, { "docid": "704d729295cddd358eba5eefdf0bdee4", "text": "Remarkable advances in instrument technology, automation and computer science have greatly simplified many aspects of previously tedious tasks in laboratory diagnostics, creating a greater volume of routine work, and significantly improving the quality of results of laboratory testing. Following the development and successful implementation of high-quality analytical standards, analytical errors are no longer the main factor influencing the reliability and clinical utilization of laboratory diagnostics. Therefore, additional sources of variation in the entire laboratory testing process should become the focus for further and necessary quality improvements. Errors occurring within the extra-analytical phases are still the prevailing source of concern. Accordingly, lack of standardized procedures for sample collection, including patient preparation, specimen acquisition, handling and storage, account for up to 93% of the errors currently encountered within the entire diagnostic process. The profound awareness that complete elimination of laboratory testing errors is unrealistic, especially those relating to extra-analytical phases that are harder to control, highlights the importance of good laboratory practice and compliance with the new accreditation standards, which encompass the adoption of suitable strategies for error prevention, tracking and reduction, including process redesign, the use of extra-analytical specifications and improved communication among caregivers.", "title": "" }, { "docid": "e05b1b6e1ca160b06e36b784df30b312", "text": "The vision of the MDSD is an era of software engineering where modelling completely replaces programming i.e. the systems are entirely generated from high-level models, each one specifying a different view of the same system. The MDSD can be seen as the new generation of visual programming languages which provides methods and tools to streamline the process of software engineering. Productivity of the development process is significantly improved by the MDSD approach and it also increases the quality of the resulting software system. The MDSD is particularly suited for those software applications which require highly specialized technical knowledge due to the involvement of complex technologies and the large number of complex and unmanageable standards. In this paper, an overview of the MDSD is presented; the working styles and the main concepts are illustrated in detail.", "title": "" } ]
scidocsrr
7e16f8f448ed1e85391937d38a54cdd5
Deep Voice 2: Multi-Speaker Neural Text-to-Speech
[ { "docid": "b0bd9a0b3e1af93a9ede23674dd74847", "text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.", "title": "" }, { "docid": "3c3980cb427c2630016f26f18cbd4ab9", "text": "MOS (mean opinion score) subjective quality studies are used to evaluate many signal processing methods. Since laboratory quality studies are time consuming and expensive, researchers often run small studies with less statistical significance or use objective measures which only approximate human perception. We propose a cost-effective and convenient measure called crowdMOS, obtained by having internet users participate in a MOS-like listening study. Workers listen and rate sentences at their leisure, using their own hardware, in an environment of their choice. Since these individuals cannot be supervised, we propose methods for detecting and discarding inaccurate scores. To automate crowdMOS testing, we offer a set of freely distributable, open-source tools for Amazon Mechanical Turk, a platform designed to facilitate crowdsourcing. These tools implement the MOS testing methodology described in this paper, providing researchers with a user-friendly means of performing subjective quality evaluations without the overhead associated with laboratory studies. Finally, we demonstrate the use of crowdMOS using data from the Blizzard text-to-speech competition, showing that it delivers accurate and repeatable results.", "title": "" }, { "docid": "8b17832466056900217ae17bbd81d061", "text": "A text-to-speech synthesis system typically consists of multiple stages, such as a text analysis frontend, an acoustic model and an audio synthesis module. Building these components often requires extensive domain expertise and may contain brittle design choices. In this paper, we present Tacotron, an end-to-end generative text-to-speech model that synthesizes speech directly from characters. Given <text, audio> pairs, the model can be trained completely from scratch with random initialization. We present several key techniques to make the sequence-tosequence framework perform well for this challenging task. Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English, outperforming a production parametric system in terms of naturalness. In addition, since Tacotron generates speech at the frame level, it’s substantially faster than sample-level autoregressive methods.", "title": "" } ]
[ { "docid": "517cb20e47d3b92d12c6fb86b22c3a19", "text": "The symmetric traveling salesman problem (TSP) is one of the best-known problems of combinatorial optimisation, very easy to explain and visualise, yet with a semblance of real-world applicability. Given a set of points, the cost of moving between each two in either direction, and a constant k, the task in the TSP is to decide, whether it is possible to visit all points, each one exactly once, and to return back to the point of departure, at a total cost of no more than k. The latest book by Applegate, Bixby, Chvátal, and Cook provides an excellent survey of methods that kick-started this \" engine of discovery in applied mathematics \" (invoked on pp. In more than 600 pages, the authors present a survey of methods used in their present-best TSP solver Concorde, almost to the exclusion of any other content. Chapters 1–4 describe the TSP and Chapters 5–6 provide a brief introduction to solving the TSP by using the branch and cut method. At the heart of the book are then Chapters 7–11, which survey various classes of cuts, in some cases first proposed by the authors themselves. Chapter 7 surveys cuts from blossoms and blocks, Chapter 8 presents cuts from combs and consecutive ones, and Chapter 9 introduces cuts from dominoes. Chapters 11 and 12 then describe in yet more detail separation and metamorphoses of strong valid inequalities. Other variants of the problem, such as the asymmetric TSP, and other solution approaches, including metaheuristics and approximation algorithms, are mentioned only in the passing. They are, however, well-covered elsewhere (Gutin & Punnen, 2002), and the seemingly narrow focus consequently enables the authors to provide an outstandingly in-depth treatment. The treatment especially benefits from authors' extensive experience with implementation of solvers for problems of combinatorial optimisation. In many textbooks on combinatorial optimisation, primal heuristics are mentioned only in passing and cuts are presented in the very mathematical style of definition – proof of validity – proof of dimensionality. Not here. Chapter 6-11 suggest separation routines, exact or heuristic, alongside the description of strong valid inequalities, Chapter 12 is devoted to management of cuts and instances of linear programming, Chapter 13 describes pricing routines for column generation, and last but not least, Chapter 15 is devoted to primal (tour-finding) heuristics. \" Implementation details \" , such as the choice of suitable data structures and trade-offs between heuristic and exact separation, are …", "title": "" }, { "docid": "e0b1e38b08b6fb098808585a5a3c8753", "text": "The decade since the Human Genome Project ended has witnessed a remarkable sequencing technology explosion that has permitted a multitude of questions about the genome to be asked and answered, at unprecedented speed and resolution. Here I present examples of how the resulting information has both enhanced our knowledge and expanded the impact of the genome on biomedical research. New sequencing technologies have also introduced exciting new areas of biological endeavour. 
The continuing upward trajectory of sequencing technology development is enabling clinical applications that are aimed at improving medical diagnosis and treatment.", "title": "" }, { "docid": "bca9cc92c80d655a9e86b0f916ae4665", "text": "Reciprocal Rank Fusion (RRF), a simple method for combining the document rankings from multiple IR systems, consistently yields better results than any individual system, and better results than the standard method Condorcet Fuse. This result is demonstrated by using RRF to combine the results of several TREC experiments, and to build a meta-learner that ranks the LETOR 3 dataset better than any previously reported method", "title": "" }, { "docid": "cd9e90ba83156a2c092d68022c4227c9", "text": "The difficulty of integer factorization is fundamental to modern cryptographic security using RSA encryption and signatures. Although a 512-bit RSA modulus was first factored in 1999, 512-bit RSA remains surprisingly common in practice across many cryptographic protocols. Popular understanding of the difficulty of 512-bit factorization does not seem to have kept pace with developments in computing power. In this paper, we optimize the CADO-NFS and Msieve implementations of the number field sieve for use on the Amazon Elastic Compute Cloud platform, allowing a non-expert to factor 512-bit RSA public keys in under four hours for $75. We go on to survey the RSA key sizes used in popular protocols, finding hundreds or thousands of deployed 512-bit RSA keys in DNSSEC, HTTPS, IMAP, POP3, SMTP, DKIM, SSH, and PGP.", "title": "" }, { "docid": "4334f0fffe71b3250ac8ee78f326f04d", "text": "The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data.", "title": "" }, { "docid": "90563706ada80e880b7fcf25489f9b27", "text": "We describe the large vocabulary automatic speech recognition system developed for Modern Standard Arabic by the SRI/Nightingale team, and used for the 2007 GALE evaluation as part of the speech translation system. We show how system performance is affected by different development choices, ranging from text processing and lexicon to decoding system architecture design. Word error rate results are reported on broadcast news and conversational data from the GALE development and evaluation test sets.", "title": "" }, { "docid": "b672aa84da41b3887664562cc4334d56", "text": "Wearable health monitoring systems have gained considerable interest in recent years owing to their tremendous promise for personal portable health watching and remote medical practices. 
The sensors with excellent flexibility and stretchability are crucial components that can provide health monitoring systems with the capability of continuously tracking physiological signals of human body without conspicuous uncomfortableness and invasiveness. The signals acquired by these sensors, such as body motion, heart rate, breath, skin temperature and metabolism parameter, are closely associated with personal health conditions. This review attempts to summarize the recent progress in flexible and stretchable sensors, concerning the detected health indicators, sensing mechanisms, functional materials, fabrication strategies, basic and desired features. The potential challenges and future perspectives of wearable health monitoring system are also briefly discussed.", "title": "" }, { "docid": "a4e2dc2f197b57adf6d81689dd689d72", "text": "Word embeddings – distributed representations of words – in deep learning are beneficial for many tasks in natural language processing (NLP). However, different embedding sets vary greatly in quality and characteristics of the captured semantics. Instead of relying on a more advanced algorithm for embedding learning, this paper proposes an ensemble approach of combining different public embedding sets with the aim of learning meta-embeddings. Experiments on word similarity and analogy tasks and on part-of-speech tagging show better performance of metaembeddings compared to individual embedding sets. One advantage of meta-embeddings is the increased vocabulary coverage. We will release our meta-embeddings publicly.", "title": "" }, { "docid": "21a68f76ed6d18431f446398674e4b4e", "text": "With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks (DNNs) have been recently found vulnerable to well-designed input samples called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool DNNs in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying DNNs in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.", "title": "" }, { "docid": "0fd00242f8f881594295165a68df313b", "text": "Following in the footsteps of the model of scientific communication, which has recently gone through a metamorphosis (from the Gutenberg galaxy to the Web galaxy), a change in the model and methods of scientific evaluation is also taking place. A set of new scientific tools are now providing a variety of indicators which measure all actions and interactions among scientists in the digital space, making new aspects of scientific communication emerge. In this work we present a method for ―capturing‖ the structure of an entire scientific community (the Bibliometrics, Scientometrics, Informetrics, Webometrics, and Altmetrics community) and the main agents that are part of it (scientists, documents, and sources) through the lens of Google Scholar Citations (GSC). 
Additionally, we compare these author "portraits" to the ones offered by other profile or social platforms currently used by academics (ResearcherID, ResearchGate, Mendeley, and Twitter), in order to test their degree of use, completeness, reliability, and the validity of the information they provide. A sample of 814 authors (researchers in Bibliometrics with a public profile created in GSC) was subsequently searched in the other platforms, collecting the main indicators computed by each of them. The data collection was carried out in September 2015. The Spearman correlation (α = 0.05) was applied to these indicators (a total of 31), and a Principal Component Analysis was carried out in order to reveal the relationships among metrics and platforms as well as the possible existence of metric clusters. We found that it is feasible to depict an accurate representation of the current state of the Bibliometrics community using data from GSC (the most influential authors, documents, journals, and publishers). Regarding the number of authors found in each platform, GSC takes the first place (814 authors), followed at a distance by ResearchGate (543), which is currently growing at a vertiginous speed. The number of Mendeley profiles is high, although 17.1% of them are basically empty. ResearcherID is also affected by this issue (34.45% of the profiles are empty), as is Twitter (47% of the Twitter accounts have published less than 100 tweets). Only 11% of our sample (93 authors) have created a profile in all the platforms analyzed in this study. From the PCA, we found two kinds of impact on the Web: first, all metrics related to academic impact. This first group can further be divided into usage metrics (views and downloads) and citation metrics. Second, all metrics related to connectivity and popularity (followers). ResearchGate indicators, as well as Mendeley readers, present a high correlation to all the indicators from GSC, but only a moderate correlation to the indicators in ResearcherID. Twitter indicators achieve only low correlations to the rest of the indicators, the highest of these being to GSC (0.42-0.46), and to Mendeley (0.41-0.46). Lastly, we present a taxonomy of all the errors that may affect the reliability of the data contained in each of these platforms, with a special emphasis on GSC, since it has been our main source of data. These errors alert us to the danger of blindly using any of these platforms for the assessment of individuals, without verifying the veracity and exhaustiveness of the data. In addition to this working paper, we have also made available a website where all the data obtained for each author and the results of the analysis of the most cited documents can be found: Scholar Mirrors.", "title": "" },
{ "docid": "da120567a99284563ddb9544d2d92318", "text": "As more applications are built on top of blockchain and public ledger, different approaches are developed to improve the performance of blockchain construction. Recently Intel proposed a new concept of proof-of-elapsed-time (PoET), which leverages trusted computing to enforce random waiting times for block construction. However, the trusted computing component may not be perfect and 100% reliable. It is not clear, to what extent, blockchain systems based on PoET can tolerate failures of the trusted computing component. The current design of PoET lacks rigorous security analysis and a theoretical foundation for assessing its strength against such attacks. 
To fulfill this gap, we develop a theoretical framework for evaluating a PoET based blockchain system, and show that the current design is vulnerable in the sense that adversary can jeopardize the blockchain system by only compromising Θ(log log n/ log n) fraction of the participating nodes, which is very small when n is relatively large. Based on our theoretical analysis, we also propose methods to mitigate these vulnerabilities.", "title": "" }, { "docid": "c46728b89e6cfca7422f4f0e1036ddab", "text": "This paper presents our named entity recognition system for Vietnamese text using labeled propagation. In here we propose: (i) a method of choosing noun phrases as the named entity candidates; (ii) a method to measure the word similarity; and (iii) a method of decreasing the effect of high frequency labels in labeled documents. Experimental results show that our labeled propagate method achieves higher accuracy than the old one [12]. In addition, when the number of the labeled data is small, its accuracy is higher than when using conditional random fields.", "title": "" }, { "docid": "56d0609fe4e68abbce27124dd5291033", "text": "Existing works indicate that the absence of explicit discourse connectives makes it difficult to recognize implicit discourse relations. In this paper we attempt to overcome this difficulty for implicit relation recognition by automatically inserting discourse connectives between arguments with the use of a language model. Then we propose two algorithms to leverage the information of these predicted connectives. One is to use these predicted implicit connectives as additional features in a supervised model. The other is to perform implicit relation recognition based only on these predicted connectives. Results on Penn Discourse Treebank 2.0 show that predicted discourse connectives help implicit relation recognition and the first algorithm can achieve an absolute average f-score improvement of 3% over a state of the art baseline system.", "title": "" }, { "docid": "2424f3c4eeeff7749850c32750885744", "text": "The boom for robotics technologies in recent years has also empowered a new generation of robotics software. The Robot Operating System (ROS) is one of the most popular frameworks for robotics researchers and makers which is moving towards commercial and industrial use. Security-wise however, ROS is vulnerable to attacks. It is rather easy to inject or eavesdrop data in a ROS application. This opens many different ways to attack a ROS application resulting in data loss, monetary damage or even physical injury. In this paper we present a secure communication channel enabling ROS-nodes to communicate with authenticity and confidentiality. We secure ROS on a peer-to-peer basis in the direct interaction between publishers and subscribers. We describe the implementation changes we have made to the ROS core and assess the overhead introduced by the new security functions.", "title": "" }, { "docid": "e4e7b1b9ec8f0688d2d10206be59cd99", "text": "Recognizing TimeML events and identifying their attributes, are important tasks in natural language processing (NLP). Several NLP applications like question answering, information retrieval, summarization, and temporal information extraction need to have some knowledge about events of the input documents. Existing methods developed for this task are restricted to limited number of languages, and for many other languages including Persian, there has not been any effort yet. 
In this paper, we introduce two different approaches for automatic event recognition and classification in Persian. For this purpose, a corpus of events has been built based on a specific version of ISO-TimeML for Persian. We present the specification of this corpus together with the results of applying mentioned approaches to the corpus. Considering these methods are the first effort towards Persian event extraction, the results are comparable to that of successful methods in English.", "title": "" },
{ "docid": "a7cc577ae2a09a5ff18333b7bfb47001", "text": "Metacercariae of an unidentified species of Apophallus Lühe, 1909 are associated with overwinter mortality in coho salmon, Oncorhynchus kisutch (Walbaum, 1792), in the West Fork Smith River, Oregon. We infected chicks with these metacercariae in order to identify the species. The average size of adult worms was 197 × 57 μm, which was 2 to 11 times smaller than other described Apophallus species. Eggs were also smaller, but larger in proportion to body size, than in other species of Apophallus. Based on these morphological differences, we describe Apophallus microsoma n. sp. In addition, sequences from the cytochrome c oxidase 1 gene from Apophallus sp. cercariae collected in the study area, which are likely conspecific with experimentally cultivated A. microsoma, differ by >12% from those we obtained from Apophallus donicus ( Skrjabin and Lindtrop, 1919 ) and from Apophallus brevis Ransom, 1920 . The taxonomy and pathology of Apophallus species is reviewed.", "title": "" },
{ "docid": "01ff7e55830977622482ab018acd2cfe", "text": "Dictionary learning has been widely used in many image processing tasks. In most of these methods, the number of basis vectors is either set by experience or coarsely evaluated empirically. In this paper, we propose a new scale adaptive dictionary learning framework, which jointly estimates suitable scales and corresponding atoms in an adaptive fashion according to the training data, without the need of prior information. We design an atom counting function and develop a reliable numerical scheme to solve the challenging optimization problem. 
Extensive experiments on texture and video data sets demonstrate quantitatively and visually that our method can estimate the scale, without damaging the sparse reconstruction ability.", "title": "" }, { "docid": "e0c448dffdffc6e9a83290ddf0bad2ae", "text": "Along with the extensive use of ontologies as a well-established means for knowledge representation, there is a pressing need for methods that can transform ontology information into knowledge stored in a Graph Database (GDB) which is considered human-like thinking in terms of objects and their relations. In this paper, we describe a two-layer knowledge graph database: a concept layer and an instance layer. The concept layer is the resulting graph representation transferred from an ontology representation. The instance layer is the instance data associated with concept nodes. In this research, we apply the two-layer approach to a retail business transaction data for business information query and reasoning. The two-layer structure is implemented in Neo4j GDB platform and information query and recommendation is implemented with a Jess reasoning engine. The query and recommendation results are represented and visualized in knowledge graph structures. The performance of the system is evaluated in terms of the time efficiency of answering queries of retail data using the GDB and the novelty of recommendations.", "title": "" }, { "docid": "44df8d7a456395d0558ac7e6cd124120", "text": "Scientific research on the effects of essential oils on human behavior lags behind the promises made by popular aromatherapy. Nearly all aspects of human behavior are closely linked to processes of attention, the basic level being that of alertness, which ranges from sleep to wakefulness. In our study we measured the influence of essential oils and components of essential oils [peppermint, jasmine, ylang-ylang, 1,8-cineole (in two different dosages) and menthol] on this core attentional function, which can be experimentally defined as speed of information processing. Substances were administered by inhalation; levels of alertness were assessed by measuring motor and reaction times in a reaction time paradigm. The performances of the six experimental groups receiving substances (n = 20 in four groups, n = 30 in two groups) were compared with those of corresponding control groups receiving water. Between-group analysis, i.e. comparisons between experimental groups and their respective control groups, mainly did not reach statistical significance. However, within-group analysis showed complex correlations between subjective evaluations of substances and objective performance, indicating that effects of essentials oils or their components on basic forms of attentional behavior are mainly psychological.", "title": "" }, { "docid": "557ae957d0d13c8560a3ea83209049e5", "text": "The Segway Robotic Mobility Platform (RMP) is a new mobile robotic platform based on the self-balancing Segway Human Transporter (HT). The Segway RMP is faster, cheaper, and more agile than existing comparable platforms. It is also rugged, has a small footprint, a zero turning radius, and yet can carry a greater payload. The new geometry of the platform presents researchers with an opportunity to examine novel topics, including people-height sensing and actuation modalities. This paper describes the history and development of the platform, its characteristics, and a summary of current research projects involving the platform at various institutions across the United States.", "title": "" } ]
scidocsrr
6bf579127f0aaef64f9de22cc675db7c
Text analytics of unstructured textual data: A study on military peacekeeping document using R text mining package
[ { "docid": "464b66e2e643096bd344bea8026f4780", "text": "In this paper we describe an application of our approach to temporal text mining in Competitive Intelligence for the biotechnology and pharmaceutical industry. The main objective is to identify changes and trends of associations among entities of interest that appear in text over time. Text Mining (TM) exploits information contained in textual data in various ways, including the type of analyses that are typically performed in Data Mining [17]. Information Extraction (IE) facilitates the semi-automatic creation of metadata repositories from text. Temporal Text mining combines Information Extraction and Data Mining techniques upon textual repositories and incorporates time and ontologies‟ issues. It consists of three main phases; the Information Extraction phase, the ontology driven generalisation of templates and the discovery of associations over time. Treatment of the temporal dimension is essential to our approach since it influences both the annotation part (IE) of the system as well as the mining part.", "title": "" }, { "docid": "98e025d04aaf1ba394d7c8ac537b40c9", "text": "The information age is characterized by a rapid growth in the amount of information available in electronic media. Traditional data handling methods are not adequate to cope with this information flood. Knowledge Discovery in Databases (KDD) is a new paradigm that focuses on computerized exploration of large amounts of data and on discovery of relevant and interesting patterns within them. While most work on KDD is concerned with structured databases, it is clear that this paradigm is required for handling the huge amount of information that is available only in unstructured textual form. To apply traditional KDD on texts it is necessary to impose some structure on the data that would be rich enough to allow for interesting KDD operations. On the other hand, we have to consider the severe limitations of current text processing technology and define rather simple structures that can be extracted from texts fairly automatically and in a reasonable cost. We propose using a text categorization paradigm to annotate text articles with meaningful concepts that are organized in hierarchical structure. We suggest that this relatively simple annotation is rich enough to provide the basis for a KDD framework, enabling data summarization, exploration of interesting patterns, and trend analysis. This research combines the KDD and text categorization paradigms and suggests advances to the state of the art in both areas.", "title": "" }, { "docid": "bb55510f034058a8aee61ce55364d004", "text": "Over the past few years, we have been trying to build an end-to-end system at Wisconsin to manage unstructured data, using extraction, integration, and user interaction. This paper describes the key information extraction (IE) challenges that we have run into, and sketches our solutions. We discuss in particular developing a declarative IE language, optimizing for this language, generating IE provenance, incorporating user feedback into the IE process, developing a novel wiki-based user interface for feedback, best-effort IE, pushing IE into RDBMSs, and more. Our work suggests that IE in managing unstructured data can open up many interesting research challenges, and that these challenges can greatly benefit from the wealth of work on managing structured data that has been carried out by the database community.", "title": "" } ]
[ { "docid": "a61441a2e0a6100e1b91ea08ff312509", "text": "We discuss the evolution and state-of-the-art of the use of Building Information Modelling (BIM) in the field of culture heritage documentation. BIM is a hot theme involving different characteristics including principles, technology, even privacy rights for the cultural heritage objects. Modern documentation needs identified the potential of BIM in the recent years. Many architects, archaeologists, conservationists, engineers regard BIM as a disruptive force, changing the way professionals can document and manage a cultural heritage structure. The latest years, there are many developments in the BIM field while the developed technology and methods challenged the cultural heritage community in the documentation framework. In this review article, following a brief historic background for the BIM, we review the recent developments focusing in the cultural heritage documentation perspective.", "title": "" }, { "docid": "d679e7cbef9ac3cfbea38b92891fc1a0", "text": "Personal health records (PHR) have enormous potential to improve both documentation of health information and patient care. The adoption of these systems, however, has been relatively slow. In this work, we used a multi-method approach to evaluate PHR systems. We interviewed potential end users---clinicians and patients---and conducted evaluations with patients and caregivers as well as a heuristic evaluation with HCI experts. In these studies, we focused on three PHR systems: Google Health, Microsoft HealthVault, and WorldMedCard. Our results demonstrate that both usability concerns and socio-cultural influences are barriers to PHR adoption and use. In this paper, we present those results as well as reflect on how both PHR designers and developers might address these issues now and throughout the design cycle.", "title": "" }, { "docid": "a8af4f122556e7d0222dd8750b9a09c6", "text": "The purpose of this study is to explore the effect of applying game-based learning (GBL) in Chinese language learning for elementary school students in Taiwan. This study used the \"Millionaire Language Game\" was the research tool in the team competitions in Chinese language instruction. It then conducted survey on the students' attitudes and feedback regarding their usage experience, and interviews with teachers and students. The main conclusions of this study are: 1) the application of GBL on Chinese language instruction has a positive influence on the learning attitude of learners, 2) the influence of the application of GBL on Chinese language instruction on the learning attitude of learners is not restricted by gender, 3) the application of GBL on Chinese language instruction influences the learning attitude of learners transcending usage experiences, 4) for teachers, GBL is beneficial to Chinese language instruction.", "title": "" }, { "docid": "b22136f00469589c984081742c4605d3", "text": "Convolutional neural network (CNN), which comprises one or more convolutional and pooling layers followed by one or more fully-connected layers, has gained popularity due to its ability to learn fruitful representations from images or speeches, capturing local dependency and slight-distortion invariance. CNN has recently been applied to the problem of activity recognition, where 1D kernels are applied to capture local dependency over time in a series of observations measured at inertial sensors (3-axis accelerometers and gyroscopes). 
In this paper we present a multi-modal CNN where we use 2D kernels in both convolutional and pooling layers, to capture local dependency over time as well as spatial dependency over sensors. Experiments on benchmark datasets demonstrate the high performance of our multi-modal CNN, compared to several state of the art methods.", "title": "" }, { "docid": "2aebb64ffa5602d2733472a89731edab", "text": "We propose a novel Channel Quality Indicator (CQI) calculation scheme in LTE\\LTE-A systems. SNR-CQI mapping scheme, as well as SINR-CQI mapping scheme, is extensively studied in previous works. However, these kinds of CQI mapping schemes always provide imprecise results in fading channels, since all of them ignore the channel quality degeneration caused by other factors. In this paper, we further improve CQI calculation by considering the impact of multipath delay spread. The calculation scheme is presented in details. In order to guarantee the converge of CQI calculation, a Block Error Ratio (BLER) based converge algorithm is proposed. Simulations show that our calculation scheme can select CQIs more precisely, and achieve better throughput performance than traditional schemes. Furthermore, our scheme converges much faster than previous schemes.", "title": "" }, { "docid": "6bc94f9b5eb90ba679964cf2a7df4de4", "text": "New high-frequency data collection technologies and machine learning analysis techniques could offer new insights into learning, especially in tasks in which students have ample space to generate unique, personalized artifacts, such as a computer program, a robot, or a solution to an engineering challenge. To date most of the work on learning analytics and educational data mining has focused on online courses or cognitive tutors, in which the tasks are more structured and the entirety of interaction happens in front of a computer. In this paper, I argue that multimodal learning analytics could offer new insights into students' learning trajectories, and present several examples of this work and its educational application.", "title": "" }, { "docid": "bf31bf712d978d16f2b4d2768f8e7354", "text": "Design/methodology/approach: Both qualitative comparisons of functionality and quantitative comparisons of false positives and false negatives are made for seven different scanners. The quantitative assessment includes data from both authenticated and unauthenticated scans. Experiments were conducted on a computer network of 28 hosts with various operating systems, services and vulnerabilities. This network was set up by a team of security researchers and professionals.", "title": "" }, { "docid": "b231f2c6b19d5c38b8aa99ec1b1e43da", "text": "Many models of social network formation implicitly assume that network properties are static in steady-state. In contrast, actual social networks are highly dynamic: allegiances and collaborations expire and may or may not be renewed at a later date. Moreover, empirical studies show that human social networks are dynamic at the individual level but static at the global level: individuals’ degree rankings change considerably over time, whereas network-level metrics such as network diameter and clustering coefficient are relatively stable. There have been some attempts to explain these properties of empirical social networks using agent-based models in which agents play social dilemma games with their immediate neighbours, but can also manipulate their network connections to strategic advantage. 
However, such models cannot straightforwardly account for reciprocal behaviour based on reputation scores (“indirect reciprocity”), which is known to play an important role in many economic interactions. In order to account for indirect reciprocity, we model the network in a bottom-up fashion: the network emerges from the low-level interactions between agents. By so doing we are able to simultaneously account for the effect of both direct reciprocity (e.g. “tit-for-tat”) as well as indirect reciprocity (helping strangers in order to increase one’s reputation). This leads to a strategic equilibrium in the frequencies with which strategies are adopted in the population as a whole, but intermittent cycling over different strategies at the level of individual agents, which in turn gives rise to social networks which are dynamic at the individual level but stable at the network level.", "title": "" }, { "docid": "ae2d295f84026ea83c74fa5e1b650385", "text": "We consider learning to generalize and extrapolate with limited data to harder compositional problems than a learner has previously seen. We take steps toward this challenge by presenting a characterization, algorithm, and implementation of a learner that programs itself automatically to reflect the structure of the problem it faces. Our key ideas are (1) transforming representations with modular units of computation is a solution for decomposing problems in a way that reflects their subproblem structure; (2) learning the structure of a computation can be formulated as a sequential decision-making problem. Experiments on solving various multilingual arithmetic problems demonstrate that our method generalizes out of distribution to unseen problem classes and extrapolates to harder versions of the same problem. Our paper provides the first element of a framework for learning general-purpose, compositional and recursive programs that design themselves.", "title": "" }, { "docid": "c7cfc79579704027bf28fc7197496b8c", "text": "There is a growing trend nowadays for patients to seek the least invasive treatments possible with less risk of complications and downtime to correct rhytides and ptosis characteristic of aging. Nonsurgical face and neck rejuvenation has been attempted with various types of interventions. Suture suspension of the face, although not a new idea, has gained prominence with the advent of the so called \"lunch-time\" face-lift. Although some have embraced this technique, many more express doubts about its safety and efficacy limiting its widespread adoption. The present review aims to evaluate several clinical parameters pertaining to thread suspensions such as longevity of results of various types of polypropylene barbed sutures, their clinical efficacy and safety, and the risk of serious adverse events associated with such sutures. Early results of barbed suture suspension remain inconclusive. Adverse events do occur though mostly minor, self-limited, and of short duration. Less clear are the data on the extent of the peak correction and the longevity of effect, and the long-term effects of the sutures themselves. The popularity of barbed suture lifting has waned for the time being. Certainly, it should not be presented as an alternative to a face-lift.", "title": "" }, { "docid": "a3386199b44e3164fafe8a8ae096b130", "text": "Diehl Aerospace GmbH (DAs) is currently involved in national German Research & Technology (R&T) projects (e.g. 
SYSTAVIO, SESAM) and in European R&T projects like ASHLEY to extend and to improve the Integrated Modular Avionics (IMA) technology. Diehl Aerospace is investing to expand its current IMA technology to enable further integration of systems including hardware modules, associated software, tools and processes while increasing the level of standardization. An additional objective is to integrate more systems on a common computing platform which uses the same toolchain, processes and integration experiences. New IMA components enable integration of high integrity fast loop system applications such as control applications. Distributed architectures which provide new types of interfaces allow integration of secondary power distribution systems along with other IMA functions. Cross A/C type usage is also a future emphasis to increase standardization and decrease development and operating costs as well as improvements on time to market and affordability of systems.", "title": "" }, { "docid": "ac0a6e663caa3cb8cdcb1a144561e624", "text": "A two-stage process is performed by human operator for cleaning windows. The first being the application of cleaning fluid, which is usually achieved by using a wetted applicator. The aim of this task being to cover the whole window area in the shortest possible time. This depends on two parameters: the size of the applicator and the path which the applicator travels without significantly overlapping previously wetted area. The second is the removal of cleaning fluid by a squeegee blade without spillage on to other areas of the facade or previously cleaned areas of glass. This is particularly difficult for example if the window is located on the roof of a building and cleaning is performed from inside by the human window cleaner.", "title": "" }, { "docid": "718cf9a405a81b9a43279a1d02f5e516", "text": "In cross-cultural psychology, one of the major sources of the development and display of human behavior is the contact between cultural populations. Such intercultural contact results in both cultural and psychological changes. At the cultural level, collective activities and social institutions become altered, and at the psychological level, there are changes in an individual's daily behavioral repertoire and sometimes in experienced stress. The two most common research findings at the individual level are that there are large variations in how people acculturate and in how well they adapt to this process. Variations in ways of acculturating have become known by the terms integration, assimilation, separation, and marginalization. Two variations in adaptation have been identified, involving psychological well-being and sociocultural competence. One important finding is that there are relationships between how individuals acculturate and how well they adapt: Often those who integrate (defined as being engaged in both their heritage culture and in the larger society) are better adapted than those who acculturate by orienting themselves to one or the other culture (by way of assimilation or separation) or to neither culture (marginalization). Implications of these findings for policy and program development and for future research are presented.", "title": "" }, { "docid": "7560af7ed6d3a2ca48c7be047e90ac47", "text": "In the domain of computer games, research into the interaction between player and game has centred on 'enjoyment', often drawing in particular on optimal experience research and Csikszentmihalyi's 'Flow theory'. 
Flow is a well-established construct for examining experience in any setting and its application to game-play is intuitive. Nevertheless, it's not immediately obvious how to translate between the flow construct and an operative description of game-play. Previous research has attempted this translation through analogy. In this article we propose a practical, integrated approach for analysis of the mechanics and aesthetics of game-play, which helps develop deeper insights into the capacity for flow within games.\n The relationship between player and game, characterized by learning and enjoyment, is central to our analysis. We begin by framing that relationship within Cowley's user-system-experience (USE) model, and expand this into an information systems framework, which enables a practical mapping of flow onto game-play. We believe this approach enhances our understanding of a player's interaction with a game and provides useful insights for games' researchers seeking to devise mechanisms to adapt game-play to individual players.", "title": "" }, { "docid": "be70a14152656eb886c8a28e7e0dd613", "text": "OBJECTIVES\nTranscutaneous electrical nerve stimulation (TENS) is an analgesic current that is used in many acute and chronic painful states. The aim of this study was to investigate central pain modulation by low-frequency TENS.\n\n\nMETHODS\nTwenty patients diagnosed with subacromial impingement syndrome of the shoulder were enrolled in the study. Patients were randomized into 2 groups: low-frequency TENS and sham TENS. Painful stimuli were delivered during which functional magnetic resonance imaging scans were performed, both before and after treatment. Ten central regions of interest that were reported to have a role in pain perception were chosen and analyzed bilaterally on functional magnetic resonance images. Perceived pain intensity during painful stimuli was evaluated using visual analog scale (VAS).\n\n\nRESULTS\nIn the low-frequency TENS group, there was a statistically significant decrease in the perceived pain intensity and pain-specific activation of the contralateral primary sensory cortex, bilateral caudal anterior cingulate cortex, and of the ipsilateral supplementary motor area. There was a statistically significant correlation between the change of VAS value and the change of activity in the contralateral thalamus, prefrontal cortex, and the ipsilateral posterior parietal cortex. In the sham TENS group, there was no significant change in VAS value and activity of regions of interest.\n\n\nDISCUSSION\nWe suggest that a 1-session low-frequency TENS may induce analgesic effect through modulation of discriminative, affective, and motor aspects of central pain perception.", "title": "" }, { "docid": "fa22819c73c9f9cd2d0ee243a7450e76", "text": "This dissertation describes a simulated autonomous car capable of driving on urbanstyle roads. The system is built around TORCS, an open source racing car simulator. Two real-time solutions are implemented; a reactive prototype using a neural network and a more complex deliberative approach using a sense, plan, act architecture. The deliberative system uses vision data fused with simulated laser range data to reliably detect road markings. The detected road markings are then used to plan a parabolic path and compute a safe speed for the vehicle. 
The vehicle uses a simulated global positioning/inertial measurement sensor to guide it along the desired path with the throttle, brakes, and steering being controlled using proportional controllers. The vehicle is able to reliably navigate the test track maintaining a safe road position at speeds of up to 40km/h.", "title": "" }, { "docid": "29479201c12e99eb9802dd05cff60c36", "text": "Exposures to air pollution in the form of particulate matter (PM) can result in excess production of reactive oxygen species (ROS) in the respiratory system, potentially causing both localized cellular injury and triggering a systemic inflammatory response. PM-induced inflammation in the lung is modulated in large part by alveolar macrophages and their biochemical signaling, including production of inflammatory cytokines, the primary mechanism via which inflammation is initiated and sustained. We developed a robust, relevant, and flexible method employing a rat alveolar macrophage cell line (NR8383) which can be applied to routine samples of PM from air quality monitoring sites to gain insight into the drivers of PM toxicity that lead to oxidative stress and inflammation. Method performance was characterized using extracts of ambient and vehicular engine exhaust PM samples. Our results indicate that the reproducibility and the sensitivity of the method are satisfactory and comparisons between PM samples can be made with good precision. The average relative percent difference for all genes detected during 10 different exposures was 17.1%. Our analysis demonstrated that 71% of genes had an average signal to noise ratio (SNR) ≥ 3. Our time course study suggests that 4 h may be an optimal in vitro exposure time for observing short-term effects of PM and capturing the initial steps of inflammatory signaling. The 4 h exposure resulted in the detection of 57 genes (out of 84 total), of which 86% had altered expression. Similarities and conserved gene signaling regulation among the PM samples were demonstrated through hierarchical clustering and other analyses. Overlying the core congruent patterns were differentially regulated genes that resulted in distinct sample-specific gene expression \"fingerprints.\" Consistent upregulation of Il1f5 and downregulation of Ccr7 was observed across all samples, while TNFα was upregulated in half of the samples and downregulated in the other half. Overall, this PM-induced cytokine expression assay could be effectively integrated into health studies and air quality monitoring programs to better understand relationships between specific PM components, oxidative stress activity and inflammatory signaling potential.", "title": "" }, { "docid": "e90e2c7ba1ae851dd02be27d9342748c", "text": "This study tested an updated cognitive-behavioral model of generalized problematic Internet use and reports results of a confirmatory analysis of the Generalized Problematic Internet Use Scale 2 (GPIUS2). Overall, the results indicated that a preference for online social interaction and use of the Internet for mood regulation, predict deficient self-regulation of Internet use (i.e., compulsive Internet use and a cognitive preoccupation with the Internet). In turn, deficient self-regulation was a significant predictor of the extent to which one’s Internet use led to negative outcomes. 
Results indicated the model fit the data well and variables in the model accounted for 27% of the variance in mood regulation scores, 65% of variance in participants’ deficient self-regulation scores, and 61% of variance in the negative outcome scores. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a239f42e7212bd0967d417338106c6f6", "text": "The aim of this article is to present a new technique for augmentation of deficient alveolar ridges and/or correction of osseous defects around dental implants. Current knowledge regarding bone augmentation for treatment of osseous defects prior to and in combination with dental implant placement is critically appraised. The \"sandwich\" bone augmentation technique is demonstrated step by step. Five pilot cases with implant dehiscence defects averaging 10.5 mm were treated with the technique. At 6 months, the sites were uncovered, and complete defect fill was noted in all cases. Results from this pilot case study indicated that the sandwich bone augmentation technique appears to enhance the outcomes of bone augmentation by using the positive properties of each applied material (autograft, DFDBA, hydroxyapatite, and collagen membrane). Future clinical trials for comparison of this approach with other bone augmentation techniques and histologic evaluation of the outcomes are needed to validate these findings.", "title": "" }, { "docid": "2df7d9426c4481ea31c4e8a9d2e1f552", "text": "A low-power 1.2 V CMOS chopper-stabilized analog front-end integrated circuit (IC) for glucose monitoring is presented in this letter. The operating parameters of the IC, including reference voltage in potentiostat, current offset, output gain, and offset, are fully programmable. The IC is fabricated using 0.13-μm CMOS technology, has an active area of 1.4 mm × 4.3 mm, and its power consumption is 30.2 μW. Furthermore, a chopper-stabilized open-loop transimpedance amplifier is proposed for low-power and low-noise implementation. The integrated input-referred current noise is 260 pArms with a bandwidth of 100 Hz.", "title": "" } ]
scidocsrr
afb5d0fca98c26f92bfd58897fc8ea41
PIkit: A New Kernel-Independent Processor-Interconnect Rootkit
[ { "docid": "f9b6662dc19c47892bb7b95c5b7dc181", "text": "The ability to update firmware is a feature that is found in nearly all modern embedded systems. We demonstrate how this feature can be exploited to allow attackers to inject malicious firmware modifications into vulnerable embedded devices. We discuss techniques for exploiting such vulnerable functionality and the implementation of a proof of concept printer malware capable of network reconnaissance, data exfiltration and propagation to general purpose computers and other embedded device types. We present a case study of the HP-RFU (Remote Firmware Update) LaserJet printer firmware modification vulnerability, which allows arbitrary injection of malware into the printer’s firmware via standard printed documents. We show vulnerable population data gathered by continuously tracking all publicly accessible printers discovered through an exhaustive scan of IPv4 space. To show that firmware update signing is not the panacea of embedded defense, we present an analysis of known vulnerabilities found in third-party libraries in 373 LaserJet firmware images. Prior research has shown that the design flaws and vulnerabilities presented in this paper are found in other modern embedded systems. Thus, the exploitation techniques presented in this paper can be generalized to compromise other embedded systems. Keywords-Embedded system exploitation; Firmware modification attack; Embedded system rootkit; HP-RFU vulnerability.", "title": "" }, { "docid": "14dd650afb3dae58ffb1a798e065825a", "text": "Copilot is a coprocessor-based kernel integrity monitor for commodity systems. Copilot is designed to detect malicious modifications to a host’s kernel and has correctly detected the presence of 12 real-world rootkits, each within 30 seconds of their installation with less than a 1% penalty to the host’s performance. Copilot requires no modifications to the protected host’s software and can be expected to operate correctly even when the host kernel is thoroughly compromised – an advantage over traditional monitors designed to run on the host itself.", "title": "" } ]
[ { "docid": "cd224f035982a669dcd8eb0c086a1be0", "text": "In this paper we integrate a humanoid robot with a powered wheelchair with the aim of lowering the cognitive requirements needed for powered mobility. We propose two roles for this companion: pointing out obstacles and giving directions. We show that children enjoyed driving with the humanoid companion by their side during a field-trial in an uncontrolled environment. Moreover, we present the results of a driving experiment for adults where the companion acted as a driving aid and conclude that participants preferred the humanoid companion to a simulated companion. Our results suggest that people will welcome a humanoid companion for their wheelchairs.", "title": "" }, { "docid": "c0a93ce6a5d557e82fa5632b30a3addc", "text": "Receiver Operating Characteristics (ROC) graphs are a useful technique for organizing classifiers and visualizing their performance. ROC graphs have been used in cost-sensitive learning because of the ease with which class skew and error cost information can be applied to them to yield cost-sensitive decisions. However, they have been criticized because of their inability to handle instance-varying costs; that is, domains in which error costs vary from one instance to another. This paper presents and investigates a technique for adapting ROC graphs for use with domains in which misclassification costs vary within the instance population.", "title": "" }, { "docid": "f565a815207932f6603b19fc57b02d4c", "text": "This study was aimed at extending the use of assistive technology (i.e., photocells, interface and personal computer) to support choice strategies by three girls with Rett syndrome and severe to profound developmental disabilities. A second purpose of the study was to reduce stereotypic behaviors exhibited by the participants involved (i.e., body rocking, hand washing and hand mouthing). Finally, a third goal of the study was to monitor the effects of such program on the participants' indices of happiness. The study was carried out according to a multiple probe design across responses for each participant. Results showed that the three girls increased the adaptive responses and decreased the stereotyped behaviors during intervention phases compared to baseline. Moreover, during intervention phases, the indices of happiness augmented for each girl as well. Clinical, psychological and rehabilitative implications of the findings are discussed.", "title": "" }, { "docid": "ba0c9cfd4b9fdb9adf81becd77aa88dc", "text": "Two sets of experiments were carried out to examine the organization of associational connections within the rat entorhinal cortex. First, a comprehensive analysis of the areal and laminar distribution of intrinsic projections was performed by using the anterograde tracers Phaseolus vulgaris-leuocoagglutinin (PHA-L) and biotinylated dextran amine (BDA). Second, retrograde tracers were injected into the dentate gyrus and PHA-L and BDA were injected into the entorhinal cortex to determine the extent to which entorhinal neurons that project to different septotemporal levels of the dentate gyrus are linked by intrinsic connections. The regional distribution of intrinsic projections within the entorhinal cortex was related to the location of the cells of origin along the mediolateral axis of the entorhinal cortex. 
Cells located in the lateral regions of the entorhinal cortex gave rise to intrinsic connections that largely remained within the lateral reaches of the entorhinal cortex, i.e., within the rostrocaudally situated entorhinal band of cells that projected to septal levels of the dentate gyrus. Cells located in the medial regions of the entorhinal cortex gave rise to intrinsic projections confined to the medial portion of the entorhinal cortex. Injections made into mid-mediolateral regions of the entorhinal cortex mainly gave rise to projections to mid-mediolateral levels, although some fibers did enter either lateral or medial portions of the entorhinal cortex. These patterns were the same regardless of whether the projections originated from the superficial (II-III) or deep (V-VI) layers of the entorhinal cortex. This organizational scheme indicates, and our combined retrograde/anterograde labeling studies confirmed, that laterally situated entorhinal neurons that project to septal levels of the dentate gyrus are not in direct communication with neurons projecting to the temporal portions of the dentate gyrus. These results suggest that entorhinal intrinsic connections allow for both integration (within a band) and segregation (across bands) of entorhinal cortical information processing.", "title": "" }, { "docid": "d7307a2d0c3d4a9622bd8e137e124562", "text": "BACKGROUND\nConsumers of research (researchers, administrators, educators and clinicians) frequently use standard critical appraisal tools to evaluate the quality of published research reports. However, there is no consensus regarding the most appropriate critical appraisal tool for allied health research. We summarized the content, intent, construction and psychometric properties of published, currently available critical appraisal tools to identify common elements and their relevance to allied health research.\n\n\nMETHODS\nA systematic review was undertaken of 121 published critical appraisal tools sourced from 108 papers located on electronic databases and the Internet. The tools were classified according to the study design for which they were intended. Their items were then classified into one of 12 criteria based on their intent. Commonly occurring items were identified. The empirical basis for construction of the tool, the method by which overall quality of the study was established, the psychometric properties of the critical appraisal tools and whether guidelines were provided for their use were also recorded.\n\n\nRESULTS\nEighty-seven percent of critical appraisal tools were specific to a research design, with most tools having been developed for experimental studies. There was considerable variability in items contained in the critical appraisal tools. Twelve percent of available tools were developed using specified empirical research. Forty-nine percent of the critical appraisal tools summarized the quality appraisal into a numeric summary score. Few critical appraisal tools had documented evidence of validity of their items, or reliability of use. Guidelines regarding administration of the tools were provided in 43% of cases.\n\n\nCONCLUSIONS\nThere was considerable variability in intent, components, construction and psychometric properties of published critical appraisal tools for research reports. There is no \"gold standard' critical appraisal tool for any study design, nor is there any widely accepted generic tool that can be applied equally well across study types. 
No tool was specific to allied health research requirements. Thus interpretation of critical appraisal of research reports currently needs to be considered in light of the properties and intent of the critical appraisal tool chosen for the task.", "title": "" }, { "docid": "6f94fd155f3689ab1a6b242243b13e09", "text": "Personalized medicine performs diagnoses and treatments according to the DNA information of the patients. The new paradigm will change the health care model in the future. A doctor will perform the DNA sequence matching instead of the regular clinical laboratory tests to diagnose and medicate the diseases. Additionally, with the help of the affordable personal genomics services such as 23andMe, personalized medicine will be applied to a great population. Cloud computing will be the perfect computing model as the volume of the DNA data and the computation over it are often immense. However, due to the sensitivity, the DNA data should be encrypted before being outsourced into the cloud. In this paper, we start from a practical system model of the personalize medicine and present a solution for the secure DNA sequence matching problem in cloud computing. Comparing with the existing solutions, our scheme protects the DNA data privacy as well as the search pattern to provide a better privacy guarantee. We have proved that our scheme is secure under the well-defined cryptographic assumption, i.e., the sub-group decision assumption over a bilinear group. Unlike the existing interactive schemes, our scheme requires only one round of communication, which is critical in practical application scenarios. We also carry out a simulation study using the real-world DNA data to evaluate the performance of our scheme. The simulation results show that the computation overhead for real world problems is practical, and the communication cost is small. Furthermore, our scheme is not limited to the genome matching problem but it applies to general privacy preserving pattern matching problems which is widely used in real world.", "title": "" }, { "docid": "f33147619ba2d24efcea9e32f70c7695", "text": "The wide use of micro bloggers such as Twitter offers a valuable and reliable source of information during natural disasters. The big volume of Twitter data calls for a scalable data management system whereas the semi-structured data analysis requires full-text searching function. As a result, it becomes challenging yet essential for disaster response agencies to take full advantage of social media data for decision making in a near-real-time fashion. In this work, we use Lucene to empower HBase with full-text searching ability to build a scalable social media data analytics system for observing and analyzing human behaviors during the Hurricane Sandy disaster. Experiments show the scalability and efficiency of the system. Furthermore, the discovery of communities has the benefit of identifying influential users and tracking the topical changes as the disaster unfolds. We develop a novel approach to discover communities in Twitter by applying spectral clustering algorithm to retweet graph. The topics and influential users of each community are also analyzed and demonstrated using Latent Semantic Indexing (LSI).", "title": "" }, { "docid": "8a973e6cab1254e6419831c8d96bc93e", "text": "This study, for the first time, distinguishes between nightmares and bad dreams, measures the frequency of each using dream logs, and separately assesses the relation between nightmares, bad dreams, and well-being. 
Eighty-nine participants completed 7 measures of well-being and recorded their dreams for 4 consecutive weeks. The dream logs yielded estimated mean annual nightmare and bad-dream frequencies that were significantly (ps < .01) greater than the mean 12-month and 1-month retrospective estimates. Nightmare frequency had more significant correlations than bad-dream frequency with well-being, suggesting that nightmares are a more severe expression of the same basic phenomenon. The findings confirm and extend evidence that nightmares are more prevalent than was previously believed and underscore the need to differentiate nightmares from bad dreams.", "title": "" }, { "docid": "ebf92a0faf6538f1d2b85fb2aa497e80", "text": "The generally accepted assumption by most multimedia researchers is that learning is inhibited when on-screen text and narration containing the same information is presented simultaneously, rather than on-screen text or narration alone. This is known as the verbal redundancy effect. Are there situations where the reverse is true? This research was designed to investigate the reverse redundancy effect for non-native English speakers learning English reading comprehension, where two instructional modes were used the redundant mode and the modality mode. In the redundant mode, static pictures and audio narration were presented with synchronized redundant on-screen text. In the modality mode, only static pictures and audio were presented. In both modes, learners were allowed to control the pacing of the lessons. Participants were 209 Yemeni learners in their first year of tertiary education. Examination of text comprehension scores indicated that those learners who were exposed to the redundancy mode performed significantly better than learners in the modality mode. They were also significantly more motivated than their counterparts in the modality mode. This finding has added an important modification to the redundancy effect. That is the reverse redundancy effect is true for multimedia learning of English as a foreign language for students where textual information was foreign to them. In such situations, the redundant synchronized on-screen text did not impede learning; rather it reduced the cognitive load and thereby enhanced learning.", "title": "" }, { "docid": "68612f23057840e01bec9673c5d31865", "text": "The current status of studies of online shopping attitudes and behavior is investigated through an analysis of 35 empirical articles found in nine primary Information Systems (IS) journals and three major IS conference proceedings. A taxonomy is developed based on our analysis. A conceptual model of online shopping is presented and discussed in light of existing empirical studies. Areas for further research are discussed.", "title": "" }, { "docid": "3122a2a89992538d81b8b17daef1842c", "text": "The content of the web has increasingly become a focus for academic research. Computer programs are needed in order to conduct any large-scale processing of web pages, requiring the use of a web crawler at some stage in order to fetch the pages to be analysed. The processing of the text of web pages in order to extract information can be expensive in terms of processor time. Consequently a distributed design is proposed in order to effectively use idle computing resources and to help information scientists avoid the need to employ dedicated equipment. 
A system developed using the model is examined and the advantages and limitations of the approach are discussed.", "title": "" }, { "docid": "b96836da7518ceccace39347f06067c6", "text": "A number of visual question answering approaches have been proposed recently, aiming at understanding the visual scenes by answering the natural language questions. While the image question answering has drawn significant attention, video question answering is largely unexplored. Video-QA is different from Image-QA since the information and the events are scattered among multiple frames. In order to better utilize the temporal structure of the videos and the phrasal structures of the answers, we propose two mechanisms: the re-watching and the re-reading mechanisms and combine them into the forgettable-watcher model. Then we propose a TGIF-QA dataset for video question answering with the help of automatic question generation. Finally, we evaluate the models on our dataset. The experimental results show the effectiveness of our proposed models.", "title": "" }, { "docid": "a2e2117e3d2a01f2f28835350ba1d732", "text": "Previously, several natural integral transforms of Minkowski question mark function F (x) were introduced by the author. Each of them is uniquely characterized by certain regularity conditions and the functional equation, thus encoding intrinsic information about F (x). One of them the dyadic period function G(z) was defined via certain transcendental integral. In this paper we introduce a family of “distributions” Fp(x) for R p ≥ 1, such that F1(x) is the question mark function and F2(x) is a discrete distribution with support on x = 1. Thus, all the aforementioned integral transforms are calculated for such p. As a consequence, the generating function of moments of F p(x) satisfies the three term functional equation. This has an independent interest, though our main concern is the information it provides about F (x). This approach yields certain explicit series for G(z). This also solves the problem in expressing the moments of F (x) in closed form.", "title": "" }, { "docid": "26065fd6e8451c178cc19d2e71da4cc7", "text": "Urtica dioica or stinging nettle is traditionally used as an herbal medicine in Western Asia. The current study represents the investigation of antimicrobial activity of U. dioica from nine crude extracts that were prepared using different organic solvents, obtained from two extraction methods: the Soxhlet extractor (Method I), which included the use of four solvents with ethyl acetate and hexane, or the sequential partitions (Method II) with a five solvent system (butanol). The antibacterial and antifungal activities of crude extracts were tested against 28 bacteria, three yeast strains and seven fungal isolates by the disc diffusion and broth dilution methods. Amoxicillin was used as positive control for bacteria strains, vancomycin for Streptococcus sp., miconazole nitrate (30 microg/mL) as positive control for fungi and yeast, and pure methanol (v/v) as negative control. The disc diffusion assay was used to determine the sensitivity of the samples, whilst the broth dilution method was used for the determination of the minimal inhibition concentration (MIC). The ethyl acetate and hexane extract from extraction method I (EA I and HE I) exhibited highest inhibition against some pathogenic bacteria such as Bacillus cereus, MRSA and Vibrio parahaemolyticus. 
A selection of extracts that showed some activity was further tested for the MIC and minimal bactericidal concentrations (MBC). MIC values of Bacillus subtilis and Methicillin-resistant Staphylococcus aureus (MRSA) using butanol extract of extraction method II (BE II) were 8.33 and 16.33mg/mL, respectively; while the MIC value using ethyl acetate extract of extraction method II (EAE II) for Vibrio parahaemolyticus was 0.13mg/mL. Our study showed that 47.06% of extracts inhibited Gram-negative (8 out of 17), and 63.63% of extracts also inhibited Gram-positive bacteria (7 out of 11); besides, statistically the frequency of antimicrobial activity was 13.45% (35 out of 342) which in this among 21.71% belongs to antimicrobial activity extracts from extraction method I (33 out of 152 of crude extracts) and 6.82% from extraction method II (13 out of 190 of crude extracts). However, crude extracts from method I exhibited better antimicrobial activity against the Gram-positive bacteria than the Gram-negative bacteria. The positive results on medicinal plants screening for antibacterial activity constitutes primary information for further phytochemical and pharmacological studies. Therefore, the extracts could be suitable as antimicrobial agents in pharmaceutical and food industry.", "title": "" }, { "docid": "a86a7cafdd464e40c8a9cf8207d249ae", "text": "Mobile marketing offers great opportunities for businesses. Marketing activities supported by mobile devices allow companies to directly communicate with their consumers without time or location barriers. Possibilities for marketers are numerous, but many aspects of mobile marketing still need further investigation. Especially, the topic of mobile advertising (m-advertising) is of major interest. M-advertising addresses consumers with individualized advertising messages via mobile devices. The underlying paper discusses the relevance of m-advertising and investigates how perceived advertising value of mobile marketing can be increased. The analysis is based on a study among consumers. All together a quota sample of 815 mobile phone users was interviewed. The results indicate that the message content is of greatest relevance for the perceived advertising value, while a high frequency of message exposure has a negative impact on it.", "title": "" }, { "docid": "77b78ec70f390289424cade3850fc098", "text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.", "title": "" }, { "docid": "982a30a9738571d2838ac6fc772c55a4", "text": "We are living in a new communication age, which will radically transform the way we live in our society. A world where anything will be connected to Internet is being created, generating an entirely new dynamic network - The Internet of Things (IoT) - enabling new means of communication between people, things and environment. 
A suitable architecture for Internet of Things (IoT) demands the implementation of several and distinct technologies that range from computing (e.g. Cloud Computing), communications (e.g. 6LowPAN, 3/4G) to semantic (e.g. data mining). Thus, it is necessary to understand very well all these technologies in order to know which of them is most suitable for a given scenario. Therefore, this paper proposes an IoT architecture for disabled people and intends to identify and describe the most relevant IoT technologies and international standards for the stack of the proposed architecture. In particular, the paper discusses the enabling IoT technologies and its feasibility for people with disabilities. At the end, it presents two use cases that are currently being deployed for this population.", "title": "" }, { "docid": "a7c0bdbf05ce5d8da20a80dcc3bfaec0", "text": "Neurosurgery is a medical specialty that relies heavily on imaging. The use of computed tomography and magnetic resonance images during preoperative planning and intraoperative surgical navigation is vital to the success of the surgery and positive patient outcome. Augmented reality application in neurosurgery has the potential to revolutionize and change the way neurosurgeons plan and perform surgical procedures in the future. Augmented reality technology is currently commercially available for neurosurgery for simulation and training. However, the use of augmented reality in the clinical setting is still in its infancy. Researchers are now testing augmented reality system prototypes to determine and address the barriers and limitations of the technology before it can be widely accepted and used in the clinical setting.", "title": "" }, { "docid": "16156f3f821fe6d65c8a753995f50b18", "text": "Memory overcommitment enables cloud providers to host more virtual machines on a single physical server, exploiting spare CPU and I/O capacity when physical memory becomes the bottleneck for virtual machine deployment. However, overcommitting memory can also cause noticeable application performance degradation. We present Ginkgo, a policy framework for overcommitting memory in an informed and automated fashion. By directly correlating application-level performance to memory, Ginkgo automates the redistribution of scarce memory across all virtual machines, satisfying performance and capacity constraints. Ginkgo also achieves memory gains for traditionally fixed-size Java applications by coordinating the redistribution of available memory with the activities of the Java Virtual Machine heap. When compared to a non-overcommitted system, Ginkgo runs the Day Trader 2.0 and SPEC Web 2009 benchmarks with the same number of virtual machines while saving up to 73% (50% omitting free space) of a physical server's memory while keeping application performance degradation within 7%.", "title": "" }, { "docid": "417fe20322c4458c58553c6d0984cabe", "text": "Neural Turing Machines (NTMs) are an instance of Memory Augmented Neural Networks, a new class of recurrent neural networks which decouple computation from memory by introducing an external memory unit. NTMs have demonstrated superior performance over Long Short-Term Memory Cells in several sequence learning tasks. A number of open source implementations of NTMs exist but are unstable during training and/or fail to replicate the reported performance of NTMs. This paper presents the details of our successful implementation of an NTM. 

Our implementation learns to solve three sequential learning tasks from the original NTM paper. We find that the choice of memory contents initialization scheme is crucial in successfully implementing an NTM. Networks with memory contents initialized to small constant values converge on average 2 times faster than the next best memory contents initialization scheme.", "title": "" } ]
scidocsrr
abd5e0c3461694f5de54fcc58fc8f0b1
NaLIR: an interactive natural language interface for querying relational databases
[ { "docid": "000961818e2e0e619f1fc0464f69a496", "text": "Database query languages can be intimidating to the non-expert, leading to the immense recent popularity for keyword based search in spite of its significant limitations. The holy grail has been the development of a natural language query interface. We present NaLIX, a generic interactive natural language query interface to an XML database. Our system can accept an arbitrary English language sentence as query input, which can include aggregation, nesting, and value joins, among other things. This query is translated, potentially after reformulation, into an XQuery expression that can be evaluated against an XML database. The translation is done through mapping grammatical proximity of natural language parsed tokens to proximity of corresponding elements in the result XML. In this demonstration, we show that NaLIX, while far from being able to pass the Turing test, is perfectly usable in practice, and able to handle even quite complex queries in a variety of application domains. In addition, we also demonstrate how carefully designed features in NaLIX facilitate the interactive query process and improve the usability of the interface.", "title": "" }, { "docid": "026a0651177ee631a80aaa7c63a1c32f", "text": "This paper is an introduction to natural language interfaces to databases (Nlidbs). A brief overview of the history of Nlidbs is rst given. Some advantages and disadvantages of Nlidbs are then discussed, comparing Nlidbs to formal query languages, form-based interfaces, and graphical interfaces. An introduction to some of the linguistic problems Nlidbs have to confront follows, for the beneet of readers less familiar with computational linguistics. The discussion then moves on to Nlidb architectures, porta-bility issues, restricted natural language input systems (including menu-based Nlidbs), and Nlidbs with reasoning capabilities. Some less explored areas of Nlidb research are then presented, namely database updates, meta-knowledge questions, temporal questions, and multi-modal Nlidbs. The paper ends with reeections on the current state of the art.", "title": "" } ]
[ { "docid": "5666b1a6289f4eac05531b8ff78755cb", "text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.", "title": "" }, { "docid": "47f64720b0526a9141393131921c6e00", "text": "The purpose of this study was to assess relative total body fat and skinfold patterning in Filipino national karate and pencak silat athletes. Participants were members of the Philippine men's and women's national teams in karate (12 males, 5 females) and pencak silat (17 males and 5 females). In addition to age, the following anthropometric measurements were taken: height, body mass, triceps, subscapular, supraspinale, umbilical, anterior thigh and medial calf skinfolds. Relative total body fat was expressed as sum of six skinfolds. Sum of skinfolds and each individual skinfold were also expressed relative to Phantom height. A two-way (Sport*Gender) ANOVA was used to determine the differences between men and women in total body fat and skinfold patterning. A Bonferroni-adjusted alpha was employed for all analyses. The women had a higher proportional sum of skinfols (80.19 ± 25.31 mm vs. 51.77 ± 21.13 mm, p = 0. 001, eta(2) = 0.275). The men had a lower proportional triceps skinfolds (-1.72 ± 0.71 versus - 0.35 ± 0.75, p < 0.001). Collapsed over gender, the karate athletes (-2.18 ± 0.66) had a lower proportional anterior thigh skinfold than their pencak silat colleagues (-1.71 ± 0.74, p = 0.001). Differences in competition requirements between sports may account for some of the disparity in anthropometric measurements. Key PointsThe purpose of the present investigation was to assess relative total body fat and skinfold patterning in Filipino national karate and pencak silat athletes.The results seem to suggest that there was no difference between combat sports in fatness.Skinfold patterning was more in line with what was reported in the literature with the males recording lower extremity fat.", "title": "" }, { "docid": "6b37baf34546ac4a630aa435af4a2284", "text": "The adoption of smartphones, devices transforming from simple communication devices to ‘smart’ and multipurpose devices, is constantly increasing. 
Amongst the main reasons are their small size, their enhanced functionality and their ability to host many useful and attractive applications. However, this vast use of mobile platforms makes them an attractive target for conducting privacy and security attacks. This scenario increases the risk introduced by these attacks for personal mobile devices, given that the use of smartphones as business tools may extend the perimeter of an organization's IT infrastructure. Furthermore, smartphone platforms provide application developers with rich capabilities, which can be used to compromise the security and privacy of the device holder and her environment (private and/or organizational). This paper examines the feasibility of malware development in smartphone platforms by average programmers that have access to the official tools and programming libraries provided by smartphone platforms. Towards this direction in this paper we initially propose specific evaluation criteria assessing the security level of the well-known smartphone platforms (i.e. Android, BlackBerry, Apple iOS, Symbian, Windows Mobile), in terms of the development of malware. In the sequel, we provide a comparative analysis, based on a proof of concept study, in which the implementation and distribution of a location tracking malware is attempted. Our study has proven that, under circumstances, all smartphone platforms could be used by average developers as privacy attack vectors, harvesting data from the device without the users knowledge and consent.", "title": "" }, { "docid": "e8d0b295658e582e534b9f41b1f14b25", "text": "The rapid development of artificial intelligence has brought the artificial intelligence threat theory as well as the problem about how to evaluate the intelligence level of intelligent products. Both need to find a quantitative method to evaluate the intelligence level of intelligence systems, including human intelligence. Based on the standard intelligence system and the extended Von Neumann architecture, this paper proposes General IQ, Service IQ and Value IQ evaluation methods for intelligence systems, depending on different evaluation purposes. Among them, the General IQ of intelligence systems is to answer the question of whether \"the artificial intelligence can surpass the human intelligence\", which is reflected in putting the intelligence systems on an equal status and conducting the unified evaluation. The Service IQ and Value IQ of intelligence systems are used to answer the question of “how the intelligent products can better serve the human”, reflecting the intelligence and required cost of each intelligence system as a product in the process of serving human. 0. Background With AlphaGo defeating the human Go champion Li Shishi in 2016[1], the worldwide artificial intelligence is developing rapidly. As a result, the artificial intelligence threat theory is widely disseminated as well. At the same time, the intelligent products are flourishing and emerging. Can the artificial intelligence surpass the human intelligence? What level exactly does the intelligence of these intelligent products reach? To answer these questions requires a quantitative method to evaluate the development level of intelligence systems. Since the introduction of the Turing test in 1950, scientists have done a great deal of work on the evaluation system for the development of artificial intelligence[2]. 
In 1950, Turing proposed the famous Turing experiment, which can determine whether a computer has the intelligence equivalent to that of human with questioning and human judgment method. As the most widely used artificial intelligence test method, the Turing test does not test the intelligence development level of artificial intelligence, but only judges whether the intelligence system can be the same with human intelligence, and depends heavily on the judges’ and testees’ subjective judgments due to too much interference from human factors, so some people often claim their ideas have passed the Turing test, even without any strict verification. On March 24, 2015, the Proceedings of the National Academy of Sciences (PNAS) published a paper proposing a new Turing test method called “Visual Turing test”, which was designed to perform a more in-depth evaluation on the image cognitive ability of computer[3]. In 2014, Mark O. Riedl of the Georgia Institute of Technology believed that the essence of intelligence lied in creativity. He designed a test called Lovelace version 2.0. The test range of Lovelace 2.0 includes the creation of a virtual story novel, poetry, painting and music[4]. There are two problems in various solutions including the Turing test in solving the artificial intelligence quantitative test. Firstly, these test methods do not form a unified intelligent model, nor do they use the model as a basis for analysis to distinguish multiple categories of intelligence, which leads to that it is impossible to test different intelligence systems uniformly, including human; secondly, these test methods can not quantitatively analyze artificial intelligence, or only quantitatively analyze some aspects of intelligence. But what percentage does this system reach to human intelligence? How’s its ratio of speed to the rate of development of human intelligence? All these problems are not covered in the above study. In response to these problems, the author of this paper proposes that: There are three types of IQs in the evaluation of intelligence level for intelligence systems based on different purposes, namely: General IQ, Service IQ and Value IQ. The theoretical basis of the three methods and IQs for the evaluation of intelligence systems, detailed definitions and evaluation methods will be elaborated in the following. 1. Theoretical Basis: Standard Intelligence System and Extended Von Neumann Architecture People are facing two major challenges in evaluating the intelligence level of an intelligence system, including human beings and artificial intelligence systems. Firstly, artificial intelligence systems do not currently form a unified model; secondly, there is no unified model for the comparison between the artificial intelligence systems and the human at present. In response to this problem, the author's research team referred to the Von Neumann Architecture[5], David Wexler's human intelligence model[6], and DIKW model system in the field of knowledge management[7], and put forward a \"standard intelligent model\", which describes the characteristics and attributes of the artificial intelligence systems and the human uniformly, and takes an agent as a system with the abilities of knowledge acquisition, mastery, creation and feedback[8] (see Figure 1). Figure 1 Standard Intelligence Model Based on this model in combination with Von Neumann architecture, an extended Von Neumann architecture can be formed (see Figure 2). 
Compared to the Von Neumann architecture, this model is added with innovation and creation function that can discover new elements of knowledge and new laws based on the existing knowledge, and make them stored in the storage for use by computers and controllers, and achieve knowledge interaction with the outside through the input / output system. The second addition is an external knowledge database or cloud storage that enables knowledge sharing, whereas the Von Neumann architecture's external storage only serves the single system. A. Arithmetic logic unit D. innovation generator B. Control unitE. input device C. Internal memory unit F. output device Figure 2 Expanded Von Neumann Architecture 2. Definitions of Three IQs of Intelligence System 2.1 Proposal of AI General IQ (AI G IQ) Based on the standard intelligent model, the research team established the AI ​ ​ IQ Test Scale and used it to conduct AI IQ tests on more than 50 artificial intelligence systems including Google, Siri, Baidu, Bing and human groups at the age of 6, 12, and 18 respectively in 2014 and 2016. From the test results, the performance of artificial intelligence systems such as Google and Baidu has been greatly increased from two years ago, but still lags behind the human group at the age of 6[9] (see Table1 and Table 2). Table 1. Ranking of top 13 artificial intelligence IQs for 2014.", "title": "" }, { "docid": "db31e73ce01652b66a2b6a4becffafd7", "text": "A thorough and complete colonoscopy is critically important in preventing colorectal cancer. Factors associated with difficult and incomplete colonoscopy include a poor bowel preparation, severe diverticulosis, redundant colon, looping, adhesions, young and female patients, patient discomfort, and the expertise of the endoscopist. For difficult colonoscopy, focusing on bowel preparation techniques, appropriate sedation and adjunct techniques such as water immersion, abdominal pressure techniques, and patient positioning can overcome many of these challenges. Occasionally, these fail and other alternatives to incomplete colonoscopy have to be considered. If patients have low risk of polyps, then noninvasive imaging options such as computed tomography (CT) or magnetic resonance (MR) colonography can be considered. Novel applications such as Colon Capsule™ and Check-Cap are also emerging. In patients in whom a clinically significant lesion is noted on a noninvasive imaging test or if they are at a higher risk of having polyps, balloon-assisted colonoscopy can be performed with either a single- or double-balloon enteroscope or colonoscope. The application of these techniques enables complete colonoscopic examination in the vast majority of patients.", "title": "" }, { "docid": "329420b8b13e8c315d341e382419315a", "text": "The aim of this research is to design an intelligent system that addresses the problem of real-time localization and navigation of visually impaired (VI) in an indoor environment using a monocular camera. Systems that have been developed so far for the VI use either many cameras (stereo and monocular) integrated with other sensors or use very complex algorithms that are computationally expensive. In this research work, a computationally less expensive integrated system has been proposed to combine imaging geometry, Visual Odometry (VO), Object Detection (OD) along with Distance-Depth (D-D) estimation algorithms for precise navigation and localization by utilizing a single monocular camera as the only sensor. 
The developed algorithm is tested for both standard Karlsruhe and indoor environment recorded datasets. Tests have been carried out in real-time using a smartphone camera that captures image data of the environment as the person moves and is sent over Wi-Fi for further processing to the MATLAB software model running on an Intel i7 processor. The algorithm provides accurate results on real-time navigation in the environment with an audio feedback about the person's location. The trajectory of the navigation is expressed in an arbitrary scale. Object detection based localization is accurate. The D-D estimation provides distance and depth measurements up to an accuracy of 94–98%.", "title": "" }, { "docid": "4e37fee25234a84a32b2ffc721ade2f8", "text": "Over the last decade, the deep neural networks are a hot topic in machine learning. It is breakthrough technology in processing images, video, speech, text and audio. Deep neural network permits us to overcome some limitations of a shallow neural network due to its deep architecture. In this paper we investigate the nature of unsupervised learning in restricted Boltzmann machine. We have proved that maximization of the log-likelihood input data distribution of restricted Boltzmann machine is equivalent to minimizing the cross-entropy and to special case of minimizing the mean squared error. Thus the nature of unsupervised learning is invariant to different training criteria. As a result we propose a new technique called “REBA” for the unsupervised training of deep neural networks. In contrast to Hinton’s conventional approach to the learning of restricted Boltzmann machine, which is based on linear nature of training rule, the proposed technique is founded on nonlinear training rule. We have shown that the classical equations for RBM learning are a special case of the proposed technique. As a result the proposed approach is more universal in contrast to the traditional energy-based model. We demonstrate the performance of the REBA technique using wellknown benchmark problem. The main contribution of this paper is a novel view and new understanding of an unsupervised learning in deep neural networks.", "title": "" }, { "docid": "d3a6be631dcf65791b4443589acb6880", "text": "We present a deep generative model for Zero-Shot Learning (ZSL). Unlike most existing methods for this problem, that represent each class as a point (via a semantic embedding), we represent each seen/unseen class using a classspecific latent-space distribution, conditioned on class attributes. We use these latent-space distributions as a prior for a supervised variational autoencoder (VAE), which also facilitates learning highly discriminative feature representations for the inputs. The entire framework is learned end-to-end using only the seen-class training data. At test time, the label for an unseen-class test input is the class that maximizes the VAE lower bound. We further extend the model to a (i) semi-supervised/transductive setting by leveraging unlabeled unseen-class data via an unsupervised learning module, and (ii) few-shot learning where we also have a small number of labeled inputs from the unseen classes. 
We compare our model with several state-of-the-art methods through a comprehensive set of experiments on a variety of benchmark data sets.", "title": "" }, { "docid": "6a1fa32d9a716b57a321561dfce83879", "text": "Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the predictions of these algorithms are stored in static databases that can easily become outdated. We propose a new algorithm, GeneMANIA, that is as accurate as the leading methods, while capable of predicting protein function in real-time. We use a fast heuristic algorithm, derived from ridge regression, to integrate multiple functional association networks and predict gene function from a single process-specific network using label propagation. Our algorithm is efficient enough to be deployed on a modern webserver and is as accurate as, or more so than, the leading methods on the MouseFunc I benchmark and a new yeast function prediction benchmark; it is robust to redundant and irrelevant data and requires, on average, less than ten seconds of computation time on tasks from these benchmarks. GeneMANIA is fast enough to predict gene function on-the-fly while achieving state-of-the-art accuracy. A prototype version of a GeneMANIA-based webserver is available at http://morrislab.med.utoronto.ca/prototype .", "title": "" }, { "docid": "23208f44270f69c4de1640bb1c865a73", "text": "In order to provide a wide variety of mobile services and applications, the fifth-generation (5G) mobile communication system has attracted much attention to improve system capacity much more than the 4G system. The drastic improvement is mainly realized by small/semi-macro cell deployment with much wider bandwidth in higher frequency bands. To cope with larger pathloss in the higher frequency bands, Massive MIMO is one of key technologies to acquire beamforming (BF) in addition to spatial multiplexing. This paper introduces 5G Massive MIMO technologies including high-performance hybrid BF and novel digital BF schemes in addition to distributed Massive MIMO concept with flexible antenna deployment. The latest 5G experimental trials using the Massive MIMO technologies are also shown briefly.", "title": "" }, { "docid": "cb2917b8e6ea5413ef25bb241ff17d1f", "text": "can be found at: Journal of Language and Social Psychology Additional services and information for http://jls.sagepub.com/cgi/alerts Email Alerts: http://jls.sagepub.com/subscriptions Subscriptions: http://www.sagepub.com/journalsReprints.nav Reprints: http://www.sagepub.com/journalsPermissions.nav Permissions: http://jls.sagepub.com/cgi/content/refs/23/4/447 SAGE Journals Online and HighWire Press platforms): (this article cites 16 articles hosted on the Citations", "title": "" }, { "docid": "03ff1bdb156c630add72357005a142f5", "text": "Recent advances in media generation techniques have made it easier for attackers to create forged images and videos. Stateof-the-art methods enable the real-time creation of a forged version of a single video obtained from a social network. Although numerous methods have been developed for detecting forged images and videos, they are generally targeted at certain domains and quickly become obsolete as new kinds of attacks appear. 
The method introduced in this paper uses a capsule network to detect various kinds of spoofs, from replay attacks using printed images or recorded videos to computergenerated videos using deep convolutional neural networks. It extends the application of capsule networks beyond their original intention to the solving of inverse graphics problems.", "title": "" }, { "docid": "00309e5119bb0de1d7b2a583b8487733", "text": "In this paper, we propose a novel Deep Reinforcement Learning framework for news recommendation. Online personalized news recommendation is a highly challenging problem due to the dynamic nature of news features and user preferences. Although some online recommendation models have been proposed to address the dynamic nature of news recommendation, these methods have three major issues. First, they only try to model current reward (e.g., Click Through Rate). Second, very few studies consider to use user feedback other than click / no click labels (e.g., how frequent user returns) to help improve recommendation. Third, these methods tend to keep recommending similar news to users, which may cause users to get bored. Therefore, to address the aforementioned challenges, we propose a Deep Q-Learning based recommendation framework, which can model future reward explicitly. We further consider user return pattern as a supplement to click / no click label in order to capture more user feedback information. In addition, an effective exploration strategy is incorporated to find new attractive news for users. Extensive experiments are conducted on the offline dataset and online production environment of a commercial news recommendation application and have shown the superior performance of our methods.", "title": "" }, { "docid": "bc9469a9912df59e554c1be99f12d319", "text": "This paper studies the joint learning of action recognition and temporal localization in long, untrimmed videos. We employ a multi-task learning framework that performs the three highly related steps of action proposal, action recognition, and action localization refinement in parallel instead of the standard sequential pipeline that performs the steps in order. We develop a novel temporal actionness regression module that estimates what proportion of a clip contains action. We use it for temporal localization but it could have other applications like video retrieval, surveillance, summarization, etc. We also introduce random shear augmentation during training to simulate viewpoint change. We evaluate our framework on three popular video benchmarks. Results demonstrate that our joint model is efficient in terms of storage and computation in that we do not need to compute and cache dense trajectory features, and that it is several times faster than its sequential ConvNets counterpart. Yet, despite being more efficient, it outperforms stateof-the-art methods with respect to accuracy.", "title": "" }, { "docid": "a3a12def5690cac73226484fe172e9f8", "text": "Solar, wind and hydro are renewable energy sources that are seen as reliable alternatives to conventional energy sources such as oil or natural gas. However, the efficiency and the performance of renewable energy systems are still under development. Consequently, the control structures of the grid-connected inverter as an important section for energy conversion and transmission should be improved to meet the requirements for grid interconnection. In this paper, a comprehensive simulation and implementation of a three-phase grid-connected inverter is presented. 
The control structure of the grid-side inverter is firstly discussed. Secondly, the space vector modulation SVM is presented. Thirdly, the synchronization for grid-connected inverters is discussed. Finally, the simulation of the grid-connected inverter system using PSIM simulation package and the system implementation are presented to illustrate concepts and compare their results.", "title": "" }, { "docid": "cb7397dedaa92be09dec1f78532b9fc5", "text": "This paper investigates a new strategy for radio resource allocation applying a non-orthogonal multiple access (NOMA) scheme. It calls for the cohabitation of users in the power domain at the transmitter side and for successive interference canceller (SIC) at the receiver side. Taking into account multi-user scheduling, subband assignment and transmit power allocation, a hybrid NOMA scheme is introduced. Adaptive switching to orthogonal signaling (OS) is performed whenever the non-orthogonal cohabitation in the power domain does not improve the achieved data rate per subband. In addition, a new power allocation technique based on waterfilling is introduced to improve the total achieved system throughput. We show that the proposed strategy for resource allocation improves both the spectral efficiency and the cell-edge user throughput. It also proves to be robust in the case of communications in crowded areas.", "title": "" }, { "docid": "a5c054899abf8aa553da4a576577678e", "text": "Developmental programming resulting from maternal malnutrition can lead to an increased risk of metabolic disorders such as obesity, insulin resistance, type 2 diabetes and cardiovascular disorders in the offspring in later life. Furthermore, many conditions linked with developmental programming are also known to be associated with the aging process. This review summarizes the available evidence about the molecular mechanisms underlying these effects, with the potential to identify novel areas of therapeutic intervention. This could also lead to the discovery of new treatment options for improved patient outcomes.", "title": "" }, { "docid": "ac41c57bcb533ab5dabcc733dd69a705", "text": "In this paper we propose two ways to deal with the imbalanced data classification problem using random forest. One is based on cost sensitive learning, and the other is based on a sampling technique. Performance metrics such as precision and recall, false positive rate and false negative rate, F-measure and weighted accuracy are computed. Both methods are shown to improve the prediction accuracy of the minority class, and have favorable performance compared to the existing algorithms.", "title": "" }, { "docid": "6992e0712e99e11b9ebe862c01c0882b", "text": "This paper is in many respects a continuation of the earlier paper by the author published in Proc. R. Soc. A in 1998 entitled ‘A comprehensive methodology for the design of ships (and other complex systems)’. The earlier paper described the approach to the initial design of ships developedby the author during some 35years of design practice, including two previous secondments to teach ship design atUCL.Thepresent paper not only takes thatdevelopment forward, it also explains how the research tool demonstrating the author’s approach to initial ship design has now been incorporated in an industry based design system to provide a working graphically and numerically integrated design system. 
This achievement is exemplified by a series of practical design investigations, undertaken by the UCL Design Research Centre led by the author, which were mainly undertaken for industry clients in order to investigate real problems towhich the approachhasbrought significant insights.The other new strand in the present paper is the emphasis on the human factors or large scale ergonomics dimension, vital to complex and large scale design products but rarely hitherto beengiven sufficientprominence in the crucial formative stagesof large scale designbecauseof the inherent difficulties in doing so. The UCL Design Building Block approach has now been incorporated in the established PARAMARINE ship design system through a module entitled SURFCON. Work is now underway on an Engineering and Physical Sciences Research Council joint project with the University of Greenwich to interface the latter’s escape simulation toolmaritimeEXODUSwithSURFCONtoprovide initial design guidance to ship designers on personnelmovement. The paper’s concluding section considers the wider applicability of the integration of simulation during initial design with the graphically driven synthesis to other complex and large scale design tasks. The paper concludes by suggesting how such an approach to complex design can contribute to the teaching of designers and, moreover, how this designapproach can enable a creative qualitative approach to engineering design to be sustained despite the risk that advances in computer based methods might encourage emphasis being accorded to solely to quantitative analysis.", "title": "" }, { "docid": "fa7a4970cf70032acfd6bdc383107574", "text": "Alumina-titanium materials (cermets) of enhanced mechanical properties have been lately developed. In this work, physical properties such as electrical conductivity and the crystalline phases in the bulk material are evaluated. As these new cermets manufactured by spark plasma sintering may have potential application for hard tissue replacements, their biocompatibility needs to be evaluated. Thus, this research aims to study the cytocompatibility of a novel alumina-titanium (25 vol. % Ti) cermet compared to its pure counterpart, the spark plasma sintered alumina. The influence of the particular surface properties (chemical composition, roughness and wettability) on the pre-osteoblastic cell response is also analyzed. The material electrical resistance revealed that this cermet may be machined to any shape by electroerosion. The investigated specimens had a slightly undulated topography, with a roughness pattern that had similar morphology in all orientations (isotropic roughness) and a sub-micrometric average roughness. Differences in skewness that implied valley-like structures in the cermet and predominance of peaks in alumina were found. The cermet presented a higher surface hydrophilicity than alumina. Any cytotoxicity risk associated with the new materials or with the innovative manufacturing methodology was rejected. Proliferation and early-differentiation stages of osteoblasts were statistically improved on the composite. Thus, our results suggest that this new multifunctional cermet could improve current alumina-based biomedical devices for applications such as hip joint replacements.", "title": "" } ]
scidocsrr
c01c71105a85901a69dd389c4ff41398
Everything You Wanted to Know About the Blockchain: Its Promise, Components, Processes, and Problems
[ { "docid": "9f6e103a331ab52b303a12779d0d5ef6", "text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.", "title": "" }, { "docid": "24167db00908c65558e8034d94dfb8da", "text": "Due to the wide variety of devices used in computer network systems, cybersecurity plays a major role in securing and improving the performance of the network or system. Although cybersecurity has received a large amount of global interest in recent years, it remains an open research space. Current security solutions in network-based cyberspace provide an open door to attackers by communicating first before authentication, thereby leaving a black hole for an attacker to enter the system before authentication. This article provides an overview of cyberthreats, traditional security solutions, and the advanced security model to overcome current security drawbacks.", "title": "" } ]
[ { "docid": "233c63982527a264b91dfb885361b657", "text": "One unfortunate consequence of the success story of wireless sensor networks (WSNs) in separate research communities is an evergrowing gap between theory and practice. Even though there is a increasing number of algorithmic methods for WSNs, the vast majority has never been tried in practice; conversely, many practical challenges are still awaiting efficient algorithmic solutions. The main cause for this discrepancy is the fact that programming sensor nodes still happens at a very technical level. We remedy the situation by introducing Wiselib, our algorithm library that allows for simple implementations of algorithms onto a large variety of hardware and software. This is achieved by employing advanced C++ techniques such as templates and inline functions, allowing to write generic code that is resolved and bound at compile time, resulting in virtually no memory or computation overhead at run time. The Wiselib runs on different host operating systems, such as Contiki, iSense OS, and ScatterWeb. Furthermore, it runs on virtual nodes simulated by Shawn. For any algorithm, the Wiselib provides data structures that suit the specific properties of the target platform. Algorithm code does not contain any platform-specific specializations, allowing a single implementation to run natively on heterogeneous networks. In this paper, we describe the building blocks of the Wiselib, and analyze the overhead. We demonstrate the effectiveness of our approach by showing how routing algorithms can be implemented. We also report on results from experiments with real sensor-node hardware.", "title": "" }, { "docid": "6d4ba8028f71da5205351be3cff61d6e", "text": "Training robots to perceive, act and communicate using multiple modalities still represents a challenging problem, particularly if robots are expected to learn efficiently from small sets of example interactions. We describe a learning approach as a step in this direction, where we teach a humanoid robot how to play the game of noughts and crosses. Given that multiple multimodal skills can be trained to play this game, we focus our attention to training the robot to perceive the game, and to interact in this game. Our multimodal deep reinforcement learning agent perceives multimodal features and exhibits verbal and non-verbal actions while playing. Experimental results using simulations show that the robot can learn to win or draw up to 98% of the games. A pilot test of the proposed multimodal system for the targeted game—integrating speech, vision and gestures—reports that reasonable and fluent interactions can be achieved using the proposed approach.", "title": "" }, { "docid": "e319eaccc4b013c830553a5d7105ed1e", "text": "With exponential growth in the size of computer networks and developed applications, the significant increasing of the potential damage that can be caused by launching attacks is becoming obvious. Meanwhile, Intrusion Detection Systems (IDSs) and Intrusion Prevention Systems (IPSs) are one of the most important defense tools against the sophisticated and ever-growing network attacks. Due to the lack of adequate dataset, anomaly-based approaches in intrusion detection systems are suffering from accurate deployment, analysis and evaluation. There exist a number of such datasets such as DARPA98, KDD99, ISC2012, and ADFA13 that have been used by the researchers to evaluate the performance of their proposed intrusion detection and intrusion prevention approaches. 
Based on our study over eleven available datasets since 1998, many such datasets are out of date and unreliable to use. Some of these datasets suffer from lack of traffic diversity and volumes, some of them do not cover the variety of attacks, while others anonymized packet information and payload which cannot reflect the current trends, or they lack feature set and metadata. This paper produces a reliable dataset that contains benign and seven common attack network flows, which meets real world criteria and is publicly avaliable. Consequently, the paper evaluates the performance of a comprehensive set of network traffic features and machine learning algorithms to indicate the best set of features for detecting the certain attack categories.", "title": "" }, { "docid": "62b2daec701f43a3282076639d01e475", "text": "Several hundred plant and herb species that have potential as novel antiviral agents have been studied, with surprisingly little overlap. A wide variety of active phytochemicals, including the flavonoids, terpenoids, lignans, sulphides, polyphenolics, coumarins, saponins, furyl compounds, alkaloids, polyines, thiophenes, proteins and peptides have been identified. Some volatile essential oils of commonly used culinary herbs, spices and herbal teas have also exhibited a high level of antiviral activity. However, given the few classes of compounds investigated, most of the pharmacopoeia of compounds in medicinal plants with antiviral activity is still not known. Several of these phytochemicals have complementary and overlapping mechanisms of action, including antiviral effects by either inhibiting the formation of viral DNA or RNA or inhibiting the activity of viral reproduction. Assay methods to determine antiviral activity include multiple-arm trials, randomized crossover studies, and more compromised designs such as nonrandomized crossovers and pre- and post-treatment analyses. Methods are needed to link antiviral efficacy/potency- and laboratory-based research. Nevertheless, the relative success achieved recently using medicinal plant/herb extracts of various species that are capable of acting therapeutically in various viral infections has raised optimism about the future of phyto-antiviral agents. As this review illustrates, there are innumerable potentially useful medicinal plants and herbs waiting to be evaluated and exploited for therapeutic applications against genetically and functionally diverse viruses families such as Retroviridae, Hepadnaviridae and Herpesviridae", "title": "" }, { "docid": "2493570aa0a224722a07e81c9aab55cd", "text": "A Smart Tailor Platform is proposed as a venue to integrate various players in garment industry, such as tailors, designers, customers, and other relevant stakeholders to automate its business processes. In, Malaysia, currently the processes are conducted manually which consume too much time in fulfilling its supply and demand for the industry. To facilitate this process, a study was conducted to understand the main components of the business operation. The components will be represented using a strategic management tool namely the Business Model Canvas (BMC). The inception phase of the Rational Unified Process (RUP) was employed to construct the BMC. The phase began by determining the basic idea and structure of the business process. The information gathered was classified into nine related dimensions and documented in accordance with the BMC. 
The generated BMC depicts the relationship of all the nine dimensions for the garment industry, and thus represents an integrated business model of smart tailor. This smart platform allows the players in the industry to promote, manage and fulfill supply and demands of their product electronically. In addition, the BMC can be used to assist developers in designing and developing the smart tailor platform.", "title": "" }, { "docid": "3d3fa5295bfa02ae27ae01adfcc0b560", "text": "In this paper we introduce the simultaneous tracking and activity recognition (STAR) problem, which exploits the synergy between location and activity to provide the information necessary for automatic health monitoring. Automatic health monitoring can potentially help the elderly population live safely and independently in their own homes by providing key information to caregivers. Our goal is to perform accurate tracking and activity recognition for multiple people in a home environment. We use a “bottom-up” approach that primarily uses information gathered by many minimally invasive sensors commonly found in home security systems. We describe a Rao-Blackwellised particle filter for room-level tracking, rudimentary activity recognition (i.e., whether or not an occupant is moving), and data association. We evaluate our approach with experiments in a simulated environment and in a real instrumented home.", "title": "" }, { "docid": "04716d649c2fb0a3fa61b026bed80046", "text": "Episodic memory provides a mechanism for accessing past experiences and has been relatively ignored in computational models of cognition. In this paper, we present a framework for describing the functional stages for computational models of episodic memory: encoding, storage, retrieval and use of the retrieved memories. We present two implementations of a computational model of episodic memory in Soar. We demonstrate all four stages of the model for a simple interactive task.", "title": "" }, { "docid": "a8ff2ea9e15569de375c34ef252d0dad", "text": "BIM (Building Information Modeling) has been recently implemented by many Architecture, Engineering, and Construction firms due to its productivity gains and long term benefits. This paper presents the development and implementation of a sustainability assessment framework for an architectural design using BIM technology in extracting data from the digital building model needed for determining the level of sustainability. The sustainability assessment is based on the LEED (Leadership in Energy and Environmental Design) Green Building Rating System, a widely accepted national standards for sustainable building design in the United States. The architectural design of a hotel project is used as a case study to verify the applicability of the framework.", "title": "" }, { "docid": "e56b2242eb08ec8b02f8a0353c19761c", "text": "Five experiments examined the effects of environmental context on recall and recognition. In Experiment 1, variability of input environments produced higher free recall performance than unchanged input environments. Experiment 2 showed improvements in cued recall when storage and test contexts matched, using a paradigm that unconfounded the variables of context mismatching and context change. In Experiment 3, recall of categories and recall of words within a category were better for same-context than different-context recall. 

In Experiment 4, subjects given identical input conditions showed strong effects of environmental context when given a free recall test, yet showed no main effects of context on a recognition test. The absence of an environmental context effect on recognition was replicated in Experiment 5, using a cued recognition task to control the semantic encodings of test words. In the discussion of these experiments, environmental context is compared with other types of context, and an attempt is made to identify the memory processes influenced by environmental context.", "title": "" }, { "docid": "8a28f3ad78a77922fd500b805139de4b", "text": "Sina Weibo is the most popular and fast growing microblogging social network in China. However, more and more spam messages are also emerging on Sina Weibo. How to detect these spam is essential for the social network security. While most previous studies attempt to detect the microblogging spam by identifying spammers, in this paper, we want to exam whether we can detect the spam by each single Weibo message, because we notice that more and more spam Weibos are posted by normal users or even popular verified users. We propose a Weibo spam detection method based on machine learning algorithm. In addition, different from most existing microblogging spam detection methods which are based on English microblogs, our method is designed to deal with the features of Chinese microblogs. Our extensive empirical study shows the effectiveness of our approach.", "title": "" }, { "docid": "bcb1688082db907ceb5cb51cc4df203e", "text": "Decision-making is one of the most important functions of managers in any kind of organization. Among different manager's decisions strategic decision-making is a complex process that must be understood completely before it can be practiced effectively. Those responsible for strategic decision-making face a task of extreme complexity and ambiguity. For these reasons, over the past decades, numerous studies have been conducted to the construction of models to aid managers and executives in making better decisions concerning the complex and highly uncertain business environment. In spite of much work that has been conducted in the area of strategic decision-making especially during the 1990's, we still know little about strategic decision-making process and factors affecting it. This paper builds on previous theoretical and empirical studies to determine the extent to which contextual factors impact the strategic decision-making processes. Results showed that researches on contextual factors effecting strategic decision-making process are either limited or have produced contradictory results, especially studies relating decision’s familiarity, magnitude of impact, organizational size, firm’s performance, dynamism, hostility, heterogeneity, industry, cognitive diversity, cognitive conflict, and manager’s need for achievement to strategic decision-making processes. Thus, the study of strategic decision-making process remains very important and much more empirical research is required before any definitive conclusion can be reached.", "title": "" }, { "docid": "7e74cc21787c1e21fd64a38f1376c6a9", "text": "The Bidirectional Reflectance Distribution Function (BRDF) describes the appearance of a material by its interaction with light at a surface point. A variety of analytical models have been proposed to represent BRDFs. However, analysis of these models has been scarce due to the lack of high-resolution measured data. 
In this work we evaluate several well-known analytical models in terms of their ability to fit measured BRDFs. We use an existing high-resolution data set of a hundred isotropic materials and compute the best approximation for each analytical model. Furthermore, we have built a new setup for efficient acquisition of anisotropic BRDFs, which allows us to acquire anisotropic materials at high resolution. We have measured four samples of anisotropic materials (brushed aluminum, velvet, and two satins). Based on the numerical errors, function plots, and rendered images we provide insights into the performance of the various models. We conclude that for most isotropic materials physically-based analytic reflectance models can represent their appearance quite well. We illustrate the important difference between the two common ways of defining the specular lobe: around the mirror direction and with respect to the half-vector. Our evaluation shows that the latter gives a more accurate shape for the reflection lobe. Our analysis of anisotropic materials indicates current parametric reflectance models cannot represent their appearances faithfully in many cases. We show that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.", "title": "" }, { "docid": "31190e66cb9bff91359f4594623880ad", "text": "This paper reports an ultra-thin MEMS capacitive pressure sensor with high pressure sensitivity of better than 150aF/Pa, and small die size of 1.0mm × 1.0mm × 60µm. It is able to detect ambient pressure change with a resolution of 0.025% in a pressure range +/−3.5KPa. This capacitive pressure sensor decouples the pressure sensing from its capacitance sensing by using a hermetically sealed capacitor that is electrically isolated but mechanically coupled with a pressure sensing diaphragm such that a large dynamic range and high pressure sensitivity can be readily achieved. Because the capacitor is hermetically sealed in a cavity, this capacitive pressure sensor is also immune to measurement media and EMI (Electromagnetic Interference) effects.", "title": "" }, { "docid": "4eaa8c1af7a4f6f6c9de1e6de3f2495f", "text": "Technologies to support the Internet of Things are becoming more important as the need to better understand our environments and make them smart increases. As a result it is predicted that intelligent devices and networks, such as WSNs, will not be isolated, but connected and integrated, composing computer networks. So far, the IP-based Internet is the largest network in the world; therefore, there are great strides to connect WSNs with the Internet. To this end, the IETF has developed a suite of protocols and open standards for accessing applications and services for wireless resource constrained networks. However, many open challenges remain, mostly due to the complex deployment characteristics of such systems and the stringent requirements imposed by various services wishing to make use of such complex systems. Thus, it becomes critically important to study how the current approaches to standardization in this area can be improved, and at the same time better understand the opportunities for the research community to contribute to the IoT field. 
To this end, this article presents an overview of current standards and research activities in both industry and academia.", "title": "" }, { "docid": "bd32bda2e79d28122f424ec4966cde15", "text": "This paper holds a survey on plant leaf diseases classification using image processing. Digital image processing has three basic steps: image processing, analysis and understanding. Image processing contains the preprocessing of the plant leaf as segmentation, color extraction, diseases specific data extraction and filtration of images. Image analysis generally deals with the classification of diseases. Plant leaf can be classified based on their morphological features with the help of various classification techniques such as PCA, SVM, and Neural Network. These classifications can be defined various properties of the plant leaf such as color, intensity, dimensions. Back propagation is most commonly used neural network. It has many learning, training, transfer functions which is used to construct various BP networks. Characteristics features are the performance parameter for image recognition. BP networks shows very good results in classification of the grapes leaf diseases. This paper provides an overview on different image processing techniques along with BP Networks used in leaf disease classification.", "title": "" }, { "docid": "c3af6eae1bd5f2901914d830280eca48", "text": "This paper proposes a novel approach for the classification of 3D shapes exploiting surface and volumetric clues inside a deep learning framework. The proposed algorithm uses three different data representations. The first is a set of depth maps obtained by rendering the 3D object. The second is a novel volumetric representation obtained by counting the number of filled voxels along each direction. Finally NURBS surfaces are fitted over the 3D object and surface curvature parameters are selected as the third representation. All the three data representations are fed to a multi-branch Convolutional Neural Network. Each branch processes a different data source and produces a feature vector by using convolutional layers of progressively reduced resolution. The extracted feature vectors are fed to a linear classifier that combines the outputs in order to get the final predictions. Experimental results on the ModelNet dataset show that the proposed approach is able to obtain a state-of-the-art performance.", "title": "" }, { "docid": "06aedeb4933e8f14c53870fd37bf01b0", "text": "It sounds good when knowing the exploratory data mining and data cleaning in this website. This is one of the books that many people looking for. In the past, many people ask about this book as their favourite book to read and collect. And now, we present hat you need quickly. It seems to be so happy to offer you this famous book. It will not become a unity of the way for you to get amazing benefits at all. But, it will serve something that will let you get the best time and moment to spend for reading the book.", "title": "" }, { "docid": "a704582d5a3019a2c714e349347a402e", "text": "Today, money laundering (ML) poses a serious threat not only to financial institutions but also to the nation. This criminal activity is becoming more and more sophisticated and seems to have moved from the cliché of drug trafficking to financing terrorism and surely not forgetting personal gain. Most international financial institutions have been implementing anti-money laundering solutions (AML) to fight investment fraud. 
However, traditional investigative techniques consume numerous man-hours. Recently, data mining approaches have been developed and are considered as well-suited techniques for detecting ML activities. Within the scope of a collaboration project for the purpose of developing a new solution for the AML Units in an international investment bank, we proposed a data mining-based solution for AML. In this paper, we present a heuristics approach to improve the performance for this solution. We also show some preliminary results associated with this method on analysing transaction datasets. Keywords—data mining, anti money laundering, clustering, heuristics.", "title": "" }, { "docid": "dd7ab988d8a40e6181cd37f8a1b1acfa", "text": "In areas approaching malaria elimination, human mobility patterns are important in determining the proportion of malaria cases that are imported or the result of low-level, endemic transmission. A convenience sample of participants enrolled in a longitudinal cohort study in the catchment area of Macha Hospital in Choma District, Southern Province, Zambia, was selected to carry a GPS data logger for one month from October 2013 to August 2014. Density maps and activity space plots were created to evaluate seasonal movement patterns. Time spent outside the household compound during anopheline biting times, and time spent in malaria high- and low-risk areas, were calculated. There was evidence of seasonal movement patterns, with increased long-distance movement during the dry season. A median of 10.6% (interquartile range (IQR): 5.8-23.8) of time was spent away from the household, which decreased during anopheline biting times to 5.6% (IQR: 1.7-14.9). The per cent of time spent in malaria high-risk areas for participants residing in high-risk areas ranged from 83.2% to 100%, but ranged from only 0.0% to 36.7% for participants residing in low-risk areas. Interventions targeted at the household may be more effective because of restricted movement during the rainy season, with limited movement between high- and low-risk areas.", "title": "" }, { "docid": "dd723b23b4a7d702f8d34f15b5c90107", "text": "Smartphones have become a prominent part of our technology driven world. When it comes to uncovering, analyzing and submitting evidence in today's criminal investigations, mobile phones play a more critical role. Thus, there is a strong need for software tools that can help investigators in the digital forensics field effectively analyze smart phone data to solve crimes.\n This paper will accentuate how digital forensic tools assist investigators in getting data acquisition, particularly messages, from applications on iOS smartphones. In addition, we will lay out the framework how to build a tool for verifying data integrity for any digital forensics tool.", "title": "" } ]
scidocsrr
f9b7547746046886ca65804f7ffe1405
ASPIER: An Automated Framework for Verifying Security Protocol Implementations
[ { "docid": "2a60bb7773d2e5458de88d2dc0e78e54", "text": "Many system errors do not emerge unless some intricate sequence of events occurs. In practice, this means that most systems have errors that only trigger after days or weeks of execution. Model checking [4] is an effective way to find such subtle errors. It takes a simplified description of the code and exhaustively tests it on all inputs, using techniques to explore vast state spaces efficiently. Unfortunately, while model checking systems code would be wonderful, it is almost never done in practice: building models is just too hard. It can take significantly more time to write a model than it did to write the code. Furthermore, by checking an abstraction of the code rather than the code itself, it is easy to miss errors.The paper's first contribution is a new model checker, CMC, which checks C and C++ implementations directly, eliminating the need for a separate abstract description of the system behavior. This has two major advantages: it reduces the effort to use model checking, and it reduces missed errors as well as time-wasting false error reports resulting from inconsistencies between the abstract description and the actual implementation. In addition, changes in the implementation can be checked immediately without updating a high-level description.The paper's second contribution is demonstrating that CMC works well on real code by applying it to three implementations of the Ad-hoc On-demand Distance Vector (AODV) networking protocol [7]. We found 34 distinct errors (roughly one bug per 328 lines of code), including a bug in the AODV specification itself. Given our experience building systems, it appears that the approach will work well in other contexts, and especially well for other networking protocols.", "title": "" }, { "docid": "d1c46994c5cfd59bdd8d52e7d4a6aa83", "text": "Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.", "title": "" }, { "docid": "7d634a9abe92990de8cb41a78c25d2cc", "text": "We present a new automatic cryptographic protocol verifier based on a simple representation of the protocol by Prolog rules, and on a new efficient algorithm that determines whether a fact can be proved from these rules or not. This verifier proves secrecy properties of the protocols. Thanks to its use of unification, it avoids the problem of the state space explosion. Another advantage is that we do not need to limit the number of runs of the protocol to analyze it. We have proved the correctness of our algorithm, and have implemented it. The experimental results show that many examples of protocols of the literature, including Skeme [24], can be analyzed by our tool with very small resources: the analysis takes from less than 0.1 s for simple protocols to 23 s for the main mode of Skeme. It uses less than 2 Mb of memory in our tests.", "title": "" } ]
[ { "docid": "a61f2e71e0b68d8f4f79bfa33c989359", "text": "Model-based testing relies on behavior models for the generation of model traces: input and expected output---test cases---for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically with and without models, purely at random, and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites that were directly derived from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.", "title": "" }, { "docid": "b062222917050f13c3a17e8de53a6abe", "text": "Exposed to traditional language learning strategies, students will gradually lose interest in and motivation to not only learn English, but also any language or culture. Hence, researchers are seeking technology-based learning strategies, such as digital game-mediated language learning, to motivate students and improve learning performance. This paper synthesizes the findings of empirical studies focused on the effectiveness of digital games in language education published within the last five years. Nine qualitative, quantitative, and mixed-method studies are collected and analyzed in this paper. The review found that recent empirical research was conducted primarily to examine the effectiveness by measuring language learning outcomes, motivation, and interactions. Weak proficiency was found in vocabulary retention, but strong proficiency was present in communicative skills such as speaking. Furthermore, in general, students reported that they are motivated to engage in language learning when digital games are involved; however, the motivation is also observed to be weak due to the design of the game and/or individual differences. The most effective method used to stimulate interaction language learning process seems to be digital games, as empirical studies demonstrate that it effectively promotes language education. However, significant work is still required to provide clear answers with respect to innovative and effective learning practice.", "title": "" }, { "docid": "3f0f97dfa920d8abf795ba7f48904a3a", "text": "An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. 
Finally, real experiments with Mechanical Turk validate the approach.", "title": "" }, { "docid": "d0c4997c611d8759805d33cf1ad9eef1", "text": "The automatic evaluation of text-based assessment items, such as short answers or essays, is an open and important research challenge. In this paper, we compare several features for the classification of short open-ended responses to questions related to a large first-year health sciences course. These features include a) traditional n-gram models; b) entity URIs (Uniform Resource Identifier) and c) entity mentions extracted using a semantic annotation API; d) entity mention embeddings based on GloVe, and e) entity URI embeddings extracted from Wikipedia. These features are used in combination with classification algorithms to discriminate correct answers from incorrect ones. Our results show that, on average, n-gram features performed the best in terms of precision and entity mentions in terms of f1-score. Similarly, in terms of accuracy, entity mentions and n-gram features performed the best. Finally, features based on dense vector representations such as entity embeddings and mention embeddings obtained the best f1-score for predicting correct answers.", "title": "" }, { "docid": "14636b427ecdab0b0bc73c1948eb8a08", "text": "We review research related to the learning of complex motor skills with respect to principles developed on the basis of simple skill learning. Although some factors seem to have opposite effects on the learning of simple and of complex skills, other factors appear to be relevant mainly for the learning of more complex skills. We interpret these apparently contradictory findings as suggesting that situations with low processing demands benefit from practice conditions that increase the load and challenge the performer, whereas practice conditions that result in extremely high load should benefit from conditions that reduce the load to more manageable levels. The findings reviewed here call into question the generalizability of results from studies using simple laboratory tasks to the learning of complex motor skills. They also demonstrate the need to use more complex skills in motor-learning research in order to gain further insights into the learning process.", "title": "" }, { "docid": "7f9640bc22241bb40154bedcfda33655", "text": "This project aims to detect possible anomalies in the resource consumption of radio base stations within the 4G LTE Radio architecture. This has been done by analyzing the statistical data that each node generates every 15 minutes, in the form of \"performance maintenance counters\". In this thesis, we introduce methods that allow resources to be automatically monitored after software updates, in order to detect any anomalies in the consumption patterns of the different resources compared to the reference period before the update. Additionally, we also attempt to narrow down the origin of anomalies by pointing out parameters potentially linked to the issue.", "title": "" }, { "docid": "e43a39af20f2e905d0bdb306235c622a", "text": "This paper presents a fully integrated remotely powered and addressable radio frequency identification (RFID) transponder working at 2.45 GHz. The achieved operating range at 4 W effective isotropically radiated power (EIRP) base-station transmit power is 12 m. The integrated circuit (IC) is implemented in a 0.5 /spl mu/m silicon-on-sapphire technology. A state-of-the-art rectifier design achieving 37% of global efficiency is embedded to supply energy to the transponder. 
The necessary input power to operate the transponder is about 2.7 /spl mu/W. Reader to transponder communication is obtained using on-off keying (OOK) modulation while transponder to reader communication is ensured using the amplitude shift keying (ASK) backscattering modulation technique. Inductive matching between the antenna and the transponder IC is used to further optimize the operating range.", "title": "" }, { "docid": "5109aa9328094af5e552ed1cab62f09a", "text": "In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using Shotton et al.'s method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action3D dataset and our algorithm outperforms Li et al. [25] on most of the cases.", "title": "" }, { "docid": "cebb70761a891fd1bce7402c10e7266c", "text": "Abstract: A new approach for mobility, providing an alternative to the private passenger car, by offering the same flexibility but with much less nuisances, is emerging, based on fully automated electric vehicles. A fleet of such vehicles might be an important element in a novel individual, door-to-door, transportation system to the city of tomorrow. For fully automated operation, trajectory planning methods that produce smooth trajectories, with low associated accelerations and jerk, for providing passenger ́s comfort, are required. This paper addresses this problem proposing an approach that consists of introducing a velocity planning stage to generate adequate time sequences for usage in the interpolating curve planners. Moreover, the generated speed profile can be merged into the trajectory for usage in trajectory-tracking tasks like it is described in this paper, or it can be used separately (from the generated 2D curve) for usage in pathfollowing tasks. Three trajectory planning methods, aided by the speed profile planning, are analysed from the point of view of passengers' comfort, implementation easiness, and trajectory tracking.", "title": "" }, { "docid": "5d8fc02f96206da7ccb112866951d4c7", "text": "Immersive technologies such as augmented reality devices are opening up a new design space for the visual analysis of data. This paper studies the potential of an augmented reality environment for the purpose of collaborative analysis of multidimensional, abstract data. We present ART, a collaborative analysis tool to visualize multidimensional data in augmented reality using an interactive, 3D parallel coordinates visualization. The visualization is anchored to a touch-sensitive tabletop, benefiting from well-established interaction techniques. The results of group-based, expert walkthroughs show that ART can facilitate immersion in the data, a fluid analysis process, and collaboration. 
Based on the results, we provide a set of guidelines and discuss future research areas to foster the development of immersive technologies as tools for the collaborative analysis of multidimensional data.", "title": "" }, { "docid": "36acc76d232f2f58fcb6b65a1d4027aa", "text": "Surface measurements of the ear are needed to assess damage in patients with disfigurement or defects of the ears and face. Population norms are useful in calculating the amount of tissue needed to rebuild the ear to adequate size and natural position. Anthropometry proved useful in defining grades of severe, moderate, and mild microtia in 73 patients with various facial syndromes. The division into grades was based on the amount of tissue lost and the degree of asymmetry in the position of the ears. Within each grade the size and position of the ears varied greatly. In almost one-third, the nonoperated microtic ears were symmetrically located, promising the best aesthetic results with the least demanding surgical procedures. In slightly over one-third, the microtic ears were associated with marked horizontal and vertical asymmetries. In cases of horizontal and vertical dislocation exceeding 20 mm, surgical correction of the defective facial framework should precede the building up of a new ear. Data on growth and age of maturation of the ears in the normal population can be useful in choosing the optimal time for ear reconstruction.", "title": "" }, { "docid": "2ae58def943d1ae34e1c62663900d64a", "text": "This document outlines a method for implementing an eye tracking device as a method of electrical wheelchair control. Through the use of measured gaze points, it is possible to translate a desired movement into a physical one. This form of interface does not only provide a form of transportation for those with severe disability but also allow the user to get a sense of control back into their lives.", "title": "" }, { "docid": "518e0713115bcaac6efc087d4107d95c", "text": "This paper introduces a device and needed signal processing for high-resolution acoustic imaging in air. The device employs off the shelf audio hardware and linear frequency modulated (LFM) pulse waveform. The image formation is based on the principle of synthetic aperture. The proposed implementation uses inverse filtering method with a unique kernel function for each pixel and focuses a synthetic aperture with no approximations. The method is solid for both far-field and near-field and easily adaptable for different synthetic aperture formation geometries. The proposed imaging is demonstrated via an inverse synthetic aperture formation where the object rotation by a stepper motor provides the required change in aspect angle. Simulated and empirical results are presented. Measurements have been done using a conventional speaker and microphones in an ordinary room with near-field distance and strong static echoes present. The resulting high-resolution 2-D spatial distribution of the acoustic reflectivity provides valuable information for many applications such as object recognition.", "title": "" }, { "docid": "01288eefbf2bc0e8c9dc4b6e0c6d70e9", "text": "The latest discoveries on diseases and their diagnosis/treatment are mostly disseminated in the form of scientific publications. However, with the rapid growth of the biomedical literature and a high level of variation and ambiguity in disease names, the task of retrieving disease-related articles becomes increasingly challenging using the traditional keywordbased approach. 
An important first step for any disease-related information extraction task in the biomedical literature is the disease mention recognition task. However, despite the strong interest, there has not been enough work done on disease name identification, perhaps because of the difficulty in obtaining adequate corpora. Towards this aim, we created a large-scale disease corpus consisting of 6900 disease mentions in 793 PubMed citations, derived from an earlier corpus. Our corpus contains rich annotations, was developed by a team of 12 annotators (two people per annotation) and covers all sentences in a PubMed abstract. Disease mentions are categorized into Specific Disease, Disease Class, Composite Mention and Modifier categories. When used as the gold standard data for a state-of-the-art machine-learning approach, significantly higher performance can be found on our corpus than the previous one. Such characteristics make this disease name corpus a valuable resource for mining disease-related information from biomedical text. The NCBI corpus is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Fe llows/Dogan/disease.html.", "title": "" }, { "docid": "99f66f4ff6a8548a4cbdac39d5f54cc4", "text": "Dissolution tests that can predict the in vivo performance of drug products are usually called biorelevant dissolution tests. Biorelevant dissolution testing can be used to guide formulation development, to identify food effects on the dissolution and bioavailability of orally administered drugs, and to identify solubility limitations and stability issues. To develop a biorelevant dissolution test for oral dosage forms, the physiological conditions in the gastrointestinal (GI) tract that can affect drug dissolution are taken into consideration according to the properties of the drug and dosage form. A variety of biorelevant methods in terms of media and hydrodynamics to simulate the contents and the conditions of the GI tract are presented. The ability of biorelevant dissolution methods to predict in vivo performance and generate successful in vitro–in vivo correlations (IVIVC) for oral formulations are also discussed through several studies.", "title": "" }, { "docid": "cda5c6908b4f52728659f89bb082d030", "text": "Until a few years ago the diagnosis of hair shaft disorders was based on light microscopy or scanning electron microscopy on plucked or cut samples of hair. Dermatoscopy is a new fast, noninvasive, and cost-efficient technique for easy in-office diagnosis of all hair shaft abnormalities including conditions such as pili trianguli and canaliculi that are not recognizable by examining hair shafts under the light microscope. It can also be used to identify disease limited to the eyebrows or eyelashes. Dermatoscopy allows for fast examination of the entire scalp and is very helpful to identify the affected hair shafts when the disease is focal.", "title": "" }, { "docid": "561320dd717f1a444735dfa322dfbd31", "text": "IEEE 802.11 based WLAN systems have gained interest to be used in the military and public authority environments, where the radio conditions can be harsh due to intentional jamming. The radio environment can be difficult also in commercial and civilian deployments since the unlicensed frequency bands are crowded. To study these problems, we built a test bed with a controlled signal path to measure the effects of different interfering signals to WLAN communications. 
We use continuous wideband noise jamming as the point of comparison, and focus on studying the effect of pulsed jamming and frequency sweep jamming. In addition, we consider also medium access control (MAC) interference. Based on the results, WLAN systems do not seem to be sensitive to the tested short noise jamming pulses. Under longer pulses, the effects are seen, and long data frames are more vulnerable to jamming than short ones. In fact, even a small amount of long frames in a data stream can ruin the performance of the whole link. Under frequency sweep jamming, slow sweeps with narrowband jamming signals can be quite harmful to WLAN communications. The results of MAC jamming show significant variation in performance between the different devices: The clear channel assessment (CCA) mechanism of some devices can be jammed very easily by using WLAN-like jamming signals. As a side product, the study also revealed some countermeasures against jamming.", "title": "" }, { "docid": "727a97b993098aa1386e5bfb11a99d4b", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.", "title": "" }, { "docid": "a920ed7775a73791946eb5610387bc23", "text": "A limiting factor for photosynthetic organisms is their light-harvesting efficiency, that is the efficiency of their conversion of light energy to chemical energy. Small modifications or variations of chlorophylls allow photosynthetic organisms to harvest sunlight at different wavelengths. Oxygenic photosynthetic organisms usually utilize only the visible portion of the solar spectrum. The cyanobacterium Acaryochloris marina carries out oxygenic photosynthesis but contains mostly chlorophyll d and only traces of chlorophyll a. Chlorophyll d provides a potential selective advantage because it enables Acaryochloris to use infrared light (700-750 nm) that is not absorbed by chlorophyll a. Recently, an even more red-shifted chlorophyll termed chlorophyll f has been reported. Here, we discuss using modified chlorophylls to extend the spectral region of light that drives photosynthetic organisms.", "title": "" }, { "docid": "fb8638c46ca5bb4a46b1556a2504416d", "text": "In this paper we investigate how a VANET-based traffic information system can overcome the two key problems of strictly limited bandwidth and minimal initial deployment. First, we present a domain specific aggregation scheme in order to minimize the required overall bandwidth. Then we propose a genetic algorithm which is able to identify good positions for static roadside units in order to cope with the highly partitioned nature of a VANET in an early deployment stage. A tailored toolchain allows to optimize the placement with respect to an application-centric objective function, based on travel time savings. By means of simulation we assess the performance of the resulting traffic information system and the optimization strategy.", "title": "" } ]
scidocsrr
890b3fd88530c8f03a6207188d6a32e7
Social LSTM: Human Trajectory Prediction in Crowded Spaces
[ { "docid": "2ea9e1cebaf85f5129a2a5344e02975a", "text": "We introduce Gaussian process dynamical models (GPDMs) for nonlinear time series analysis, with applications to learning models of human pose and motion from high-dimensional motion capture data. A GPDM is a latent variable model. It comprises a low-dimensional latent space with associated dynamics, as well as a map from the latent space to an observation space. We marginalize out the model parameters in closed form by using Gaussian process priors for both the dynamical and the observation mappings. This results in a nonparametric model for dynamical systems that accounts for uncertainty in the model. We demonstrate the approach and compare four learning algorithms on human motion capture data, in which each pose is 50-dimensional. Despite the use of small data sets, the GPDM learns an effective representation of the nonlinear dynamics in these spaces.", "title": "" }, { "docid": "d2b163b5a37419cf95d7450a05909008", "text": "In this paper we develop a Bayesian nonparametric Inverse Reinforcement Learning technique for switched Markov Decision Processes (MDP). Similar to switched linear dynamical systems, switched MDP (sMDP) can be used to represent complex behaviors composed of temporal transitions between simpler behaviors each represented by a standard MDP. We use sticky Hierarchical Dirichlet Process as a nonparametric prior on the sMDP model space, and describe a Markov Chain Monte Carlo method to efficiently learn the posterior given the behavior data. We demonstrate the effectiveness of sMDP models for learning, prediction and classification of complex agent behaviors in a simulated surveillance scenario.", "title": "" } ]
[ { "docid": "9ee2081e014e2cde151e03a554e09c8e", "text": "The emerging network slicing paradigm for 5G provides new business opportunities by enabling multi-tenancy support. At the same time, new technical challenges are introduced, as novel resource allocation algorithms are required to accommodate different business models. In particular, infrastructure providers need to implement radically new admission control policies to decide on network slices requests depending on their Service Level Agreements (SLA). When implementing such admission control policies, infrastructure providers may apply forecasting techniques in order to adjust the allocated slice resources so as to optimize the network utilization while meeting network slices' SLAs. This paper focuses on the design of three key network slicing building blocks responsible for (i) traffic analysis and prediction per network slice, (ii) admission control decisions for network slice requests, and (iii) adaptive correction of the forecasted load based on measured deviations. Our results show very substantial potential gains in terms of system utilization as well as a trade-off between conservative forecasting configurations versus more aggressive ones (higher gains, SLA risk).", "title": "" }, { "docid": "b5e170645774a92375a0b83e5c6a9743", "text": "Obesity is associated with a state of chronic, low-grade inflammation. Two manuscripts in this issue of the JCI (see the related articles beginning on pages 1796 and 1821) now report that obese adipose tissue is characterized by macrophage infiltration and that these macrophages are an important source of inflammation in this tissue. These studies prompt consideration of new models to include a major role for macrophages in the molecular changes that occur in adipose tissue in obesity.", "title": "" }, { "docid": "73577e88b085e9e187328ce36116b761", "text": "We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax. The results are correct for viewpoints that are static or moving, far away or nearby. Our approach is very simple: a relief texture (texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process: First, it is converted into an ordinary texture using a surprisingly simple 1-D forward transform. The resulting texture is then mapped onto the polygon using standard texture mapping. The 1-D warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-D shape of the displacement surface. The subsequent texture-mapping operation handles the transformation from texture to screen coordinates.", "title": "" }, { "docid": "abdd1406266d7290166eb16b8a5045a9", "text": "Individualized manufacturing of cars requires kitting: the collection of individual sets of part variants for each car. This challenging logistic task is frequently performed manually by warehouseman. We propose a mobile manipulation robotic system for autonomous kitting, building on the Kuka Miiwa platform which consists of an omnidirectional base, a 7 DoF collaborative iiwa manipulator, cameras, and distance sensors. Software modules for detection and pose estimation of transport boxes, part segmentation in these containers, recognition of part variants, grasp generation, and arm trajectory optimization have been developed and integrated. Our system is designed for collaborative kitting, i.e. 
some parts are collected by warehouseman while other parts are picked by the robot. To address safe human-robot collaboration, fast arm trajectory replanning considering previously unforeseen obstacles is realized. The developed system was evaluated in the European Robotics Challenge 2, where the Miiwa robot demonstrated autonomous kitting, part variant recognition, and avoidance of unforeseen obstacles.", "title": "" }, { "docid": "655f28b1eeed4c571237474c96ac84a0", "text": "We present six cases of extra-axial lesions: three meningiomas [including one intraventricular and one cerebellopontine angle (CPA) meningioma], one dural metastasis, one CPA schwannoma and one choroid plexus papilloma which were chosen from a larger cohort of extra-axial tumors evaluated in our institution. Apart from conventional MR examinations, all the patients also underwent perfusion-weighted imaging (PWI) using dynamic susceptibility contrast method on a 1.5 T MR unit (contrast: 0.3 mmol/kg, rate 5 ml/s). Though the presented tumors showed very similar appearance on conventional MR images, they differed significantly in perfusion examinations. The article draws special attention to the usefulness of PWI in the differentiation of various extra-axial tumors and its contribution in reaching final correct diagnoses. Finding a dural lesion with low perfusion parameters strongly argues against the diagnosis of meningioma and should raise a suspicion of a dural metastasis. In cases of CPA tumors, a lesion with low relative cerebral blood volume values should be suspected to be schwannoma, allowing exclusion of meningioma to be made. In intraventricular tumors arising from choroid plexus, low perfusion parameters can exclude a diagnosis of meningioma. In our opinion, PWI as an easy and quick to perform functional technique should be incorporated into the MR protocol of all intracranial tumors including extra-axial neoplasms.", "title": "" }, { "docid": "d99181a13ec133373f7fb40f98ea770d", "text": "Fisting is an uncommon and potentially dangerous sexual practice. This is usually a homosexual activity, but can also be a heterosexual or an autoerotic practice. A systematic review of the forensic literature yielded 14 published studies from 8 countries between 1968 and 2016 that met the inclusion/exclusion criteria, illustrating that external anogenital (anal and/or genital) trauma due to fisting is observed in 22.2% and 88.8% (reported consensual and non-consensual intercourse, respectively) of the subjects, while internal injuries are observed in the totality of the patients. Establishing the reliability of the conclusions of these studies is difficult due to a lack of uniformity in methodology used to detect and define injuries. Taking this limit into account, the aim of this article is to give a description of the external and internal injuries subsequent to reported consensual and non-consensual fisting practice, and try to find a relation between this sexual practice, the morphology of the injuries, the correlation with the use of drugs, and the relationship with assailant, where possible. 
The findings reported in this paper could be useful, especially when concerns of sexual assault arise.", "title": "" }, { "docid": "21c1493a2de747f9b5878648ee95d470", "text": "In this summary of previous work, I argue that data becomes temporarily interesting by itself to some selfimproving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively more “beautiful.” Curiosity is the desire to create or discover more non-random, non-arbitrary, “truly novel,” regular data that allows for compression progress because its regularity was not yet known. This drive maximizes “interestingness,” the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and recent artificial systems.", "title": "" }, { "docid": "868c3c6de73d53f54ca6090e9559007f", "text": "To generate useful summarization of data while maintaining privacy of sensitive information is a challenging task, especially in the big data era. The privacy-preserving principal component algorithm proposed in [1] is a promising approach when a low rank data summarization is desired. However, the analysis in [1] is limited to the case of a single principal component, which makes use of bounds on the vector-valued Bingham distribution in the unit sphere. By exploring the non-commutative structure of data matrices in the full Stiefel manifold, we extend the analysis to an arbitrary number of principal components. Our results are obtained by analyzing the asymptotic behavior of the matrix-variate Bingham distribution using tools from random matrix theory.", "title": "" }, { "docid": "d07416d917175d6bf809c4cefeeb44a3", "text": "Extracting relevant information in multilingual context from massive amounts of unstructured, structured and semi-structured data is a challenging task. Various theories have been developed and applied to ease the access to multicultural and multilingual resources. This papers describes a methodology for the development of an ontology-based Cross-Language Information Retrieval (CLIR) application and shows how it is possible to achieve the translation of Natural Language (NL) queries in any language by means of a knowledge-driven approach which allows to semi-automatically map natural language to formal language, simplifying and improving in this way the human-computer interaction and communication. The outlined research activities are based on Lexicon-Grammar (LG), a method devised for natural language formalization, automatic textual analysis and parsing. Thanks to its main characteristics, LG is independent from factors which are critical for other approaches, i.e. interaction type (voice or keyboard-based), length of sentences and propositions, type of vocabulary used and restrictions due to users' idiolects. The feasibility of our knowledge-based methodological framework, which allows mapping both data and metadata, will be tested for CLIR by implementing a domain-specific early prototype system.", "title": "" }, { "docid": "2d93bec323bb5e534a1c6256bf324e76", "text": "MRI has been increasingly used for detailed visualization of the fetus in utero as well as pregnancy structures. Yet, the familiarity of radiologists and clinicians with fetal MRI is still limited. This article provides a practical approach to fetal MR imaging. 
Fetal MRI is an interactive scanning of the moving fetus owed to the use of fast sequences. Single-shot fast spin-echo (SSFSE) T2-weighted imaging is a standard sequence. T1-weighted sequences are primarily used to demonstrate fat, calcification and hemorrhage. Balanced steady-state free-precession (SSFP), are beneficial in demonstrating fetal structures as the heart and vessels. Diffusion weighted imaging (DWI), MR spectroscopy (MRS), and diffusion tensor imaging (DTI) have potential applications in fetal imaging. Knowing the developing fetal MR anatomy is essential to detect abnormalities. MR evaluation of the developing fetal brain should include recognition of the multilayered-appearance of the cerebral parenchyma, knowledge of the timing of sulci appearance, myelination and changes in ventricular size. With advanced gestation, fetal organs as lungs and kidneys show significant changes in volume and T2-signal. Through a systematic approach, the normal anatomy of the developing fetus is shown to contrast with a wide spectrum of fetal disorders. The abnormalities displayed are graded in severity from simple common lesions to more complex rare cases. Complete fetal MRI is fulfilled by careful evaluation of the placenta, umbilical cord and amniotic cavity. Accurate interpretation of fetal MRI can provide valuable information that helps prenatal counseling, facilitate management decisions, guide therapy, and support research studies.", "title": "" }, { "docid": "5cec29bc44da28160d99530d8813da47", "text": "There are a variety of application areas in which there is a ne ed for simplifying complex polygonal surface models. These mo dels often have material properties such as colors, textures, an d surface normals. Our surface simplification algorithm, based on ite rative edge contraction and quadric error metrics, can rapidly pro duce high quality approximations of such models. We present a nat ural extension of our original error metric that can account for a wide range of vertex attributes. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—surface and object representat io s", "title": "" }, { "docid": "54a54f09781bc09dccaa6555535099a4", "text": "Tax revenue has a very important role to fund the State's finances. In order for the optimal tax revenue, the tax authorities must perform tax supervision to the taxpayers optimally. By using the self-assessment taxation system that is taxpayers calculation, pay and report their own tax obligations added with the data of other parties will create a very large data. Therefore, the tax authorities are required to immediately know the taxpayer non-compliance for further audit. This research uses the classification algorithm C4.5, SVM (Support Vector Machine), KNN (K-Nearest Neighbor), Naive Bayes and MLP (Multilayer Perceptron) to classify the level of taxpayer compliance with four goals that are corporate taxpayers comply formally and materially required, corporate taxpayers comply formally required, corporate taxpayers comply materially required and corporate taxpayers not comply formally and materially required. The classification results of each algorithm are compared and the best algorithm chosen based on criteria F-Score, Accuracy and Time taken to build the model by using fuzzy TOPSIS method. 
The final result shows that C4.5 algorithm is the best algorithm to classify taxpayer compliance level compared to other algorithms.", "title": "" }, { "docid": "e4c493697d9bece8daec6b2dd583e6bb", "text": "High dimensionality of the feature space is one of the most important concerns in text classification problems due to processing time and accuracy considerations. Selection of distinctive features is therefore essential for text classification. This study proposes a novel filter based probabilistic feature selection method, namely distinguishing feature selector (DFS), for text classification. The proposed method is compared with well-known filter approaches including chi square, information gain, Gini index and deviation from Poisson distribution. The comparison is carried out for different datasets, classification algorithms, and success measures. Experimental results explicitly indicate that DFS offers a competitive performance with respect to the abovementioned approaches in terms of classification accuracy, dimension reduction rate and processing time.", "title": "" }, { "docid": "15208617386aeb77f73ca7c2b7bb2656", "text": "Multiplication is the basic building block for several DSP processors, Image processing and many other. Over the years the computational complexities of algorithms used in Digital Signal Processors (DSPs) have gradually increased. This requires a parallel array multiplier to achieve high execution speed or to meet the performance demands. A typical implementation of such an array multiplier is Braun design. Braun multiplier is a type of parallel array multiplier. The architecture of Braun multiplier mainly consists of some Carry Save Adders, array of AND gates and one Ripple Carry Adder. In this research work, a new design of Braun Multiplier is proposed and this proposed design of multiplier uses a very fast parallel prefix adder ( Kogge Stone Adder) in place of Ripple Carry Adder. The architecture of standard Braun Multiplier is modified in this work for reducing the delay due to Ripple Carry Adder and performing faster multiplication of two binary numbers. This research also presents a comparative study of FPGA implementation on Spartan2 and Spartartan2E for new multiplier design and standard braun multiplier. The RTL design of proposed new Braun Multiplier and standard braun multiplier is done using Verilog HDL. The simulation is performed using ModelSim. The Xilinx ISE design tool is used for FPGA implementation. Comparative result shows the modified design is effective when compared in terms of delay with the standard design.", "title": "" }, { "docid": "9bb86141611c54978033e2ea40f05b15", "text": "In this work we investigate the problem of road scene semantic segmentation using Deconvolutional Networks (DNs). Several constraints limit the practical performance of DNs in this context: firstly, the paucity of existing pixelwise labelled training data, and secondly, the memory constraints of embedded hardware, which rule out the practical use of state-of-the-art DN architectures such as fully convolutional networks (FCN). To address the first constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (MDRS3) dataset, aggregating data from six existing densely and sparsely labelled datasets for training our models, and two existing, separate datasets for testing their generalisation performance. 
We show that, while MDRS3 offers a greater volume and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to overcome this, based on (i) the creation of a best-possible source network (S-Net) from the aggregated data, ignoring time and memory constraints; and (ii) the transfer of knowledge from S-Net to the memory-efficient target network (T-Net). We evaluate different techniques for S-Net creation and T-Net transferral, and demonstrate that training a constrained deconvolutional network in this manner can unlock better performance than existing training approaches. Specifically, we show that a target network can be trained to achieve improved accuracy versus an FCN despite using less than 1% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scarce or fragmented and where practical constraints exist on the desired model size. We make available our network models and aggregated multi-domain dataset for reproducibility.", "title": "" }, { "docid": "f50d0948319a4487b43b94bac09e5fab", "text": "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "title": "" }, { "docid": "2283e43c2bad5ac682fe185cb2b8a9c1", "text": "As widely recognized in the literature, information technology (IT) investments have several special characteristics that make assessing their costs and benefits complicated. Here, we address the problem of evaluating a web content management system for both internal and external use. The investment is presently undergoing an evaluation process in a multinational company. We aim at making explicit the desired benefits and expected risks of the system investment. An evaluation hierarchy at general level is constructed. After this, a more detailed hierarchy is constructed to take into account the contextual issues. To catch the contextual issues key company representatives were interviewed. The investment alternatives are compared applying the principles of the Analytic Hierarchy Process (AHP). Due to the subjective and uncertain characteristics of the strategic IT investments a wide range of sensitivity analyses is performed.", "title": "" }, { "docid": "486417082d921eba9320172a349ee28f", "text": "Circulating tumor cells (CTCs) are a popular topic in cancer research because they can be obtained by liquid biopsy, a minimally invasive procedure with more sample accessibility than tissue biopsy, to monitor a patient's condition. 
Over the past decades, CTC research has covered a wide variety of topics such as enumeration, profiling, and correlation between CTC number and patient overall survival. It is important to isolate and enrich CTCs before performing CTC analysis because CTCs in the blood stream are very rare (0⁻10 CTCs/mL of blood). Among the various approaches to separating CTCs, here, we review the research trends in the isolation and analysis of CTCs using microfluidics. Microfluidics provides many attractive advantages for CTC studies such as continuous sample processing to reduce target cell loss and easy integration of various functions into a chip, making \"do-everything-on-a-chip\" possible. However, tumor cells obtained from different sites within a tumor exhibit heterogenetic features. Thus, heterogeneous CTC profiling should be conducted at a single-cell level after isolation to guide the optimal therapeutic path. We describe the studies on single-CTC analysis based on microfluidic devices. Additionally, as a critical concern in CTC studies, we explain the use of CTCs in cancer research, despite their rarity and heterogeneity, compared with other currently emerging circulating biomarkers, including exosomes and cell-free DNA (cfDNA). Finally, the commercialization of products for CTC separation and analysis is discussed.", "title": "" }, { "docid": "c2f620287606a2e233e2d3654c64c016", "text": "Urban terrain is complex and they present a very challenging and difficult environment for simulating virtual forces as well as for rendering. The objective of this work is to research on Binary Space Partition technique (BSP) for modeling urban terrain environments. BSP is a method for recursively subdividing a space into convex sets by hyper-planes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree. Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.", "title": "" } ]
scidocsrr
c6eae3ba93282d8f44b108c1c4aeb5bf
HySAD: a semi-supervised hybrid shilling attack detector for trustworthy product recommendation
[ { "docid": "048ff79b90371eb86b9d62810cfea31f", "text": "In October, 2006 Netflix released a dataset containing 100 million anonymous movie ratings and challenged the data mining, machine learning and computer science communities to develop systems that could beat the accuracy of its recommendation system, Cinematch. We briefly describe the challenge itself, review related work and efforts, and summarize visible progress to date. Other potential uses of the data are outlined, including its application to the KDD Cup 2007.", "title": "" }, { "docid": "49a041e18a063876dc595f33fe8239a8", "text": "Significant vulnerabilities have recently been identified in collaborative filtering recommender systems. These vulnerabilities mostly emanate from the open nature of such systems and their reliance on userspecified judgments for building profiles. Attackers can easily introduce biased data in an attempt to force the system to “adapt” in a manner advantageous to them. Our research in secure personalization is examining a range of attack models, from the simple to the complex, and a variety of recommendation techniques. In this chapter, we explore an attack model that focuses on a subset of users with similar tastes and show that such an attack can be highly successful against both user-based and item-based collaborative filtering. We also introduce a detection model that can significantly decrease the impact of this attack.", "title": "" } ]
[ { "docid": "f1052f4704b5ec55e2a131dc2f2d6afc", "text": "A simple control for a permanent motor drive is described which provides a wide speed range without the use of a shaft sensor. Two line-to-line voltages and two stator currents are sensed and processed in analog form to produce the stator flux linkage space vector. The angle of this vector is then used in a microcontroller to produce the appropriate stator current command signals for the hysteresis current controller of the inverter so that near unity power factor can be achieved over a wide range of torque and speed. A speed signal is derived from the rate of change of angle of the flux linkage. A drift compensation program is proposed to avoid calculation errors in the determination of angle position and speed. The control system has been implemented on a 5 kW motor using Nd-Fe-B magnets. The closed loop speed control has been shown to be effective down to a frequency of less than 1 Hz, thus providing a wide range of speed control. An open loop starting program is used to accelerate the motor up to this limit frequency with minimum speed oscillation.<<ETX>>", "title": "" }, { "docid": "4186e2c50355516bf8860a7fea4415cc", "text": "Performing approximate data matching has always been an intriguing problem for both industry and academia. This task becomes even more challenging when the requirement of data privacy rises. In this paper, we propose a novel technique to address the problem of efficient privacy-preserving approximate record linkage. The secure framework we propose consists of two basic components. First, we utilize a secure blocking component based on phonetic algorithms statistically enhanced to improve security. Second, we use a secure matching component where actual approximate matching is performed using a novel private approach of the Levenshtein Distance algorithm. Our goal is to combine the speed of private blocking with the increased accuracy of approximate secure matching. Category: Ubiquitous computing; Security and privacy", "title": "" }, { "docid": "05527d807914ad45b321c8e512fbd346", "text": "www.frontiersinecology.org © The Ecological Society of America S research can provide important and timely insights into environmental issues, but scientists face many personal and institutional challenges to effectively synthesize and transmit their findings to relevant stakeholders. In this paper, we address how “interface” or “boundary” organizations – organizations created to foster the use of science knowledge in environmental policy making and environmental management, as well as to encourage changes in behavior, further learning, inquiry, discovery, or enjoyment – can help scientists improve and facilitate effective communication and the application of scientific information (Gieryn 1999). Interface organizations are synergistic and operate across a range of scales, purposes, and intensities of information flow between scientists and audiences. Considerable attention has focused on how to involve scientists in the decision-making process regarding natural resource management issues related to their area of expertise (Andersson 2004; Roth et al. 2004; Rinaudo and Garin 2005; Bacic et al. 2006; Olsson and Andersson 2007). These efforts have resulted in scientific input to environmental issues, including ecosystem management (Meffe et al. 2002), adaptive collaborative management (Buck et al. 2001; Colfer 2005), and integrated watershed management (Jeffrey and Gearey 2006). 
A common element of many of these approaches is the use of an organization or group to manage and facilitate the interaction between the scientists and the “users” or “managers” of a natural resource. Cash et al. (2003) identified key functions of successful “boundary management” organizations. These functions include communication, translation, and mediation (convening groups, as well as resolving differences). Successful efforts are characterized by having clear lines of responsibility and accountability on both sides of the boundary, and by providing a forum in which information can be co-produced by scientists and information users. Interface organizations typically: (1) Engage: seeking out scientists with important findings and then building or filling a demand for their insights among different communities and for various niches, contexts, and scales. The organization usually serves as a convener. SCIENCE, COMMUNICATION, AND CONTROVERSIES", "title": "" }, { "docid": "04afc062996d9db91168116347819ddd", "text": "BACKGROUND\nThis study investigated the role of Sirtuin 1 (SIRT1)/forkhead box O3 (FOXO3) pathway, and a possible protective function for Icariin (ICA), in intestinal ischemia-reperfusion (I/R) injury and hypoxia-reoxygenation (H/R) injury.\n\n\nMATERIALS AND METHODS\nMale Sprague-Dawley rats were pretreated with different doses of ICA (30 and 60 mg/kg) or olive oil as control 1 h before intestinal I/R. Caco-2 cells were pretreated with different concentrations of ICA (25, 50, and 100 μg/mL) and then subjected to H/R-induced injury.\n\n\nRESULTS\nThe in vivo results demonstrated that ICA pretreatment significantly improved I/R-induced tissue damage and decreased serum tumor necrosis factor α and interleukin-6 levels. Changes of manganese superoxide dismutase, Bcl-2, and Bim were also reversed by ICA, and apoptosis was reduced. Importantly, the protective effects of ICA were positively associated with SIRT1 activation. Increased SIRT1 expression, as well as decreased acetylated FOXO3 expression, was observed in Caco-2 cells pretreated with ICA. Additionally, the protective effects of ICA were abrogated in the presence of SIRT1 inhibitor nicotinamide. This suggests that ICA exerts a protective effect upon H/R injury through activation of SIRT1/FOXO3 signaling pathway. Accordingly, the SIRT1 activator resveratrol achieved a similar protective effect as ICA on H/R injury, whereas cellular damage resulting from H/R was exacerbated by SIRT1 knockdown and nicotinamide.\n\n\nCONCLUSIONS\nSIRT1, activated by ICA, protects intestinal epithelial cells from I/R injury by inducing FOXO3 deacetylation both in vivo and in vitro These findings suggest that the SIRT1/FOXO3 pathway can be a target for therapeutic approaches intended to minimize injury resulting from intestinal dysfunction.", "title": "" }, { "docid": "218bb1cf213a84f758f222a96ee19fd1", "text": "The cytokinesis-block micronucleus cytome (CBMN Cyt) assay is one of the best-validated methods for measuring chromosome damage in human lymphocytes. This paper describes the methodology, biology, and mechanisms underlying the application of this technique for biodosimetry following exposure to ionizing radiation. Apart from the measurement of micronuclei, it is also possible to measure other important biomarkers within the CBMN Cyt assay that are relevant to radiation biodosimetry. 
These include nucleoplasmic bridges, which are an important additional measure of radiation-induced damage that originate from dicentric chromosomes as well as the proportion of dividing cells and cells undergoing cell death. A brief account is also given of current developments in the automation of this technique and important knowledge gaps that need attention to further enhance the applicability of this important method for radiation biodosimetry.", "title": "" }, { "docid": "015dbd7c7d1011802046f9b24df24280", "text": "The Resource Description Framework (RDF) provides a common data model for the integration of “real-time” social and sensor data streams with the Web and with each other. While there exist numerous protocols and data formats for exchanging dynamic RDF data, or RDF updates, these options should be examined carefully in order to enable a Semantic Web equivalent of the high-throughput, low-latency streams of typical Web 2.0, multimedia, and gaming applications. This paper contains a brief survey of RDF update formats and a high-level discussion of both TCP and UDPbased transport protocols for updates. Its main contribution is the experimental evaluation of a UDP-based architecture which serves as a real-world example of a high-performance RDF streaming application in an Internet-scale distributed environment.", "title": "" }, { "docid": "541055772a5c2bed70649d2ca9a6c584", "text": "This report discusses methods for forecasting hourly loads of a US utility as part of the load forecasting track of the Global Energy Forecasting Competition 2012 hosted on Kaggle. The methods described (gradient boosting machines and Gaussian processes) are generic machine learning / regression algorithms and few domain specific adjustments were made. Despite this, the algorithms were able to produce highly competitive predictions and hopefully they can inspire more refined techniques to compete with state-of-the-art load forecasting methodologies.", "title": "" }, { "docid": "e3747bf4694854d0a38d73de5d478f17", "text": "Virtual Reality (VR) is starting to be used in psychological therapy around the world. However, a thorough understanding of the reason why VR is effective and what effect it has on the human psyche is still missing. Most research on this subject is related to the concept of presence. This paper gives an up-to-date overview of research in this diverse field. It starts with the most prevailing definitions and theories on presence, most of which attribute special roles for the mental process of attention and for mental models of the virtual space. A review of the phenomena thought to be effected by presence shows that there is still a strong need for research on this subject because little conclusive evidence exists regarding the relationship between presence and phenoma such as emotional responses to virtual stimuli. An investigation shows there has been substantial research for developing methods for measuring presence and research regarding factors that contribute to presence. Knowledge of these contributing factors can play a vital role in development of new VR applications, but key knowledge elements in this area are still missing.", "title": "" }, { "docid": "25a7f23c146add12bfab3f1fc497a065", "text": "One of the greatest puzzles of human evolutionary history concerns the how and why of the transition from small-scale, ‘simple’ societies to large-scale, hierarchically complex ones. This paper reviews theoretical approaches to resolving this puzzle. 
Our discussion integrates ideas and concepts from evolutionary biology, anthropology, and political science. The evolutionary framework of multilevel selection suggests that complex hierarchies can arise in response to selection imposed by intergroup conflict (warfare). The logical coherency of this theory has been investigated with mathematical models, and its predictions were tested empirically by constructing a database of the largest territorial states in the world (with the focus on the preindustrial era).", "title": "" }, { "docid": "22ef6b3fd2f4c926d81881039244511f", "text": "Whereas in most cases a fatty liver remains free of inflammation, 10%-20% of patients who have fatty liver develop inflammation and fibrosis (nonalcoholic steatohepatitis [NASH]). Inflammation may precede steatosis in certain instances. Therefore, NASH could reflect a disease where inflammation is followed by steatosis. In contrast, NASH subsequent to simple steatosis may be the consequence of a failure of antilipotoxic protection. In both situations, many parallel hits derived from the gut and/or the adipose tissue may promote liver inflammation. Endoplasmic reticulum stress and related signaling networks, (adipo)cytokines, and innate immunity are emerging as central pathways that regulate key features of NASH.", "title": "" }, { "docid": "4f11ddc3fdcbf997efe0cafaed09f0f0", "text": "This paper proposes an area-based stereo algorithm suitable to real time applications. The core of the algorithm relies on the uniqueness constraint and on a matching process that rejects previous matches as soon as more reliable ones are found. The proposed approach is also compared with bidirectional matching (BM), since the latter is the basic method for detecting unreliable matches in most area-based stereo algorithms. We describe the algorithm’s matching core, the additional constraints introduced to improve the reliability and the computational optimizations carried out to achieve a very fast implementation. We provide a large set of experimental results, obtained on a standard set of images with ground-truth as well as on stereo sequences, and computation time measurements. These data are used to evaluate the proposed algorithm and compare it with a well-known algorithm based on BM. q 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "513b378c3fc2e2e6f23a406b63dc33a9", "text": "Mining frequent itemsets from the large transactional database is a very critical and important task. Many algorithms have been proposed from past many years, But FP-tree like algorithms are considered as very effective algorithms for efficiently mine frequent item sets. These algorithms considered as efficient because of their compact structure and also for less generation of candidates itemsets compare to Apriori and Apriori like algorithms. Therefore this paper aims to presents a basic Concepts of some of the algorithms (FP-Growth, COFI-Tree, CT-PRO) based upon the FPTree like structure for mining the frequent item sets along with their capabilities and comparisons.", "title": "" }, { "docid": "b6c1aa9e3b55b6ad7bd01f8b1c017e7b", "text": "In the last decade, with availability of large datasets and more computing power, machine learning systems have achieved (super)human performance in a wide variety of tasks. Examples of this rapid development can be seen in image recognition, speech analysis, strategic game planning and many more. The problem with many state-of-the-art models is a lack of transparency and interpretability. 
The lack thereof is a major drawback in many applications, e.g. healthcare and finance, where rationale for model's decision is a requirement for trust. In the light of these issues, explainable artificial intelligence (XAI) has become an area of interest in research community. This paper summarizes recent developments in XAI in supervised learning, starts a discussion on its connection with artificial general intelligence, and gives proposals for further research directions.", "title": "" }, { "docid": "38438e6a0bd03ad5f076daa1f248d001", "text": "In recent years, research on reading-comprehension question and answering has drawn intense attention in Natural Language Processing. However, it is still a key issue to obtain the high-level semantic vector representation of question and paragraph. Drawing inspiration from DrQA [1], which is a question and answering system proposed by Facebook, this paper proposes an attention-based question and answering model which adds the binary representation of the paragraph, the paragraph's attention to the question, and the question's attention to the paragraph. Meanwhile, a self-attention calculation method is proposed to enhance the question semantic vector representation. Besides, it uses a multi-layer bidirectional Long Short-Term Memory (BiLSTM) network to calculate the hidden semantic vector representations of paragraphs and questions. Finally, bilinear functions are used to calculate the probability of the answer's position in the paragraph. The experimental results on the Stanford Question Answering Dataset (SQuAD) development set show that the F1 score is 80.1% and the EM score is 71.4%, which demonstrates that the performance of the model is better than that of the model of DrQA, since they increase by 2% and 1.3% respectively.", "title": "" }, { "docid": "f3a49052d58bb266fa45c348ad47b549", "text": "Deep learning models based on CNNs are predominantly used in image classification tasks. Such approaches, assuming independence of object categories, normally use a CNN as a feature learner and apply a flat classifier on top of it. Object classes in many settings have hierarchical relations, and classifiers exploiting these relations should perform better. We propose hierarchical classification models combining a CNN to extract hierarchical representations of images, and an RNN or sequence-to-sequence model to capture a hierarchical tree of classes. In addition, we apply residual learning to the RNN part in order to facilitate training our compound model and improve generalization of the model. Experimental results on a real world proprietary dataset of images show that our hierarchical networks perform better than state-of-the-art CNNs.", "title": "" }, { "docid": "fb38bdc5772975f9705b2ca90f819b25", "text": "We propose a general approach to the gaze redirection problem in images that utilizes machine learning. The idea is to learn to re-synthesize images by training on pairs of images with known disparities between gaze directions. We show that such learning-based re-synthesis can achieve convincing gaze redirection based on monocular input, and that the learned systems generalize well to people and imaging conditions unseen during training. We describe and compare three instantiations of our idea. The first system is based on efficient decision forest predictors and redirects the gaze by a fixed angle in real-time (on a single CPU), being particularly suitable for the videoconferencing gaze correction. The second system is based on a deep architecture and allows gaze redirection by a range of angles. The second system achieves higher photorealism, while being several times slower.
The third system is based on real-time decision forests at test time, while using the supervision from a “teacher” deep network during training. The third system approaches the quality of a teacher network in our experiments, and thus provides a highly realistic real-time monocular solution to the gaze correction problem. We present in-depth assessment and comparisons of the proposed systems based on quantitative measurements and a user study.", "title": "" }, { "docid": "9252ee5085159f4665e74caf8e7f4110", "text": "Intrabody communication (IBC) uses the human body as a signal transmission medium. In the capacitive coupling IBC approach, the signal is transmitted through the body, and the signal return path is closed through the environment. The received signal level is affected by the orientation of the transmitter with respect to the receiver, the number of ground electrodes connected to the body, the size of the receiver ground plane, and the surrounding environment. In this paper, we present a characterization of the capacitive IBC channel in the frequency range from 100 kHz to 100 MHz, obtained using a network analyzer and a pair of baluns. In order to better understand the transmission path in the frequency range of interest, we analyze the intrabody channel transmission characteristics using different electrode arrangements, test persons, environments, and body positions and movements. The transmission gain increases with frequency for 20 dB/dec and depends on the transmitter to the receiver distance, and the electrode arrangements. For a proper IBC configuration, the variations of the environment, test persons, body positions, and movements affect the transmission gain less than 2 dB.", "title": "" }, { "docid": "30ba7b3cf3ba8a7760703a90261d70eb", "text": "Starch is a major storage product of many economically important crops such as wheat, rice, maize, tapioca, and potato. A large-scale starch processing industry has emerged in the last century. In the past decades, we have seen a shift from the acid hydrolysis of starch to the use of starch-converting enzymes in the production of maltodextrin, modified starches, or glucose and fructose syrups. Currently, these enzymes comprise about 30% of the world’s enzyme production. Besides the use in starch hydrolysis, starch-converting enzymes are also used in a number of other industrial applications, such as laundry and porcelain detergents or as anti-staling agents in baking. A number of these starch-converting enzymes belong to a single family: the -amylase family or family13 glycosyl hydrolases. This group of enzymes share a number of common characteristics such as a ( / )8 barrel structure, the hydrolysis or formation of glycosidic bonds in the conformation, and a number of conserved amino acid residues in the active site. As many as 21 different reaction and product specificities are found in this family. Currently, 25 three-dimensional (3D) structures of a few members of the -amylase family have been determined using protein crystallization and X-ray crystallography. These data in combination with site-directed mutagenesis studies have helped to better understand the interactions between the substrate or product molecule and the different amino acids found in and around the active site. This review illustrates the reaction and product diversity found within the -amylase family, the mechanistic principles deduced from structure–function relationship structures, and the use of the enzymes of this family in industrial applications. 
© 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "2bb988a1d2b3269e7ebe989a65f44487", "text": "The future connectivity landscape and, notably, the 5G wireless systems will feature Ultra-Reliable Low Latency Communication (URLLC). The coupling of high reliability and low latency requirements in URLLC use cases makes the wireless access design very challenging, in terms of both the protocol design and of the associated transmission techniques. This paper aims to provide a broad perspective on the fundamental tradeoffs in URLLC as well as the principles used in building access protocols. Two specific technologies are considered in the context of URLLC: massive MIMO and multi-connectivity, also termed interface diversity. The paper also touches upon the important question of the proper statistical methodology for designing and assessing extremely high reliability levels.", "title": "" }, { "docid": "f3cfd3e026c368146102185c31761fd2", "text": "In this paper, we summarize the human emotion recognition using different set of electroencephalogram (EEG) channels using discrete wavelet transform. An audio-visual induction based protocol has been designed with more dynamic emotional content for inducing discrete emotions (disgust, happy, surprise, fear and neutral). EEG signals are collected using 64 electrodes from 20 subjects and are placed over the entire scalp using International 10-10 system. The raw EEG signals are preprocessed using Surface Laplacian (SL) filtering method and decomposed into three different frequency bands (alpha, beta and gamma) using Discrete Wavelet Transform (DWT). We have used “db4” wavelet function for deriving a set of conventional and modified energy based features from the EEG signals for classifying emotions. Two simple pattern classification methods, K Nearest Neighbor (KNN) and Linear Discriminant Analysis (LDA) methods are used and their performances are compared for emotional states classification. The experimental results indicate that, one of the proposed features (ALREE) gives the maximum average classification rate of 83.26% using KNN and 75.21% using LDA compared to those of conventional features. Finally, we present the average classification rate and subsets of emotions classification rate of these two different classifiers for justifying the performance of our emotion recognition system.", "title": "" } ]
scidocsrr
311172e6662a2d88ccafb0f07613bf35
Multiple Arousal Theory and Daily-Life Electrodermal Activity Asymmetry
[ { "docid": "d76e649c6daeb71baf377c2b36623e29", "text": "The somatic marker hypothesis proposes that decision-making is a process that depends on emotion. Studies have shown that damage of the ventromedial prefrontal (VMF) cortex precludes the ability to use somatic (emotional) signals that are necessary for guiding decisions in the advantageous direction. However, given the role of the amygdala in emotional processing, we asked whether amygdala damage also would interfere with decision-making. Furthermore, we asked whether there might be a difference between the roles that the amygdala and VMF cortex play in decision-making. To address these two questions, we studied a group of patients with bilateral amygdala, but not VMF, damage and a group of patients with bilateral VMF, but not amygdala, damage. We used the \"gambling task\" to measure decision-making performance and electrodermal activity (skin conductance responses, SCR) as an index of somatic state activation. All patients, those with amygdala damage as well as those with VMF damage, were (1) impaired on the gambling task and (2) unable to develop anticipatory SCRs while they pondered risky choices. However, VMF patients were able to generate SCRs when they received a reward or a punishment (play money), whereas amygdala patients failed to do so. In a Pavlovian conditioning experiment the VMF patients acquired a conditioned SCR to visual stimuli paired with an aversive loud sound, whereas amygdala patients failed to do so. The results suggest that amygdala damage is associated with impairment in decision-making and that the roles played by the amygdala and VMF in decision-making are different.", "title": "" } ]
[ { "docid": "1ace2a8a8c6b4274ac0891e711d13190", "text": "Recent music information retrieval (MIR) research pays increasing attention to music classification based on moods expressed by music pieces. The first Audio Mood Classification (AMC) evaluation task was held in the 2007 running of the Music Information Retrieval Evaluation eXchange (MIREX). This paper describes important issues in setting up the task, including dataset construction and ground-truth labeling, and analyzes human assessments on the audio dataset, as well as system performances from various angles. Interesting findings include system performance differences with regard to mood clusters and the levels of agreement amongst human judgments regarding mood labeling. Based on these analyses, we summarize experiences learned from the first community scale evaluation of the AMC task and propose recommendations for future AMC and similar evaluation tasks.", "title": "" }, { "docid": "305ae3e7a263bb12f7456edca94c06ca", "text": "We study the effects of changes in uncertainty about future fiscal policy on aggregate economic activity. In light of large fiscal deficits and high public debt levels in the U.S., a fiscal consolidation seems inevitable. However, there is notable uncertainty about the policy mix and timing of such a budgetary adjustment. To evaluate the consequences of the increased uncertainty, we first estimate tax and spending processes for the U.S. that allow for time-varying volatility. We then feed these processes into an otherwise standard New Keynesian business cycle model calibrated to the U.S. economy. We find that fiscal volatility shocks can have a sizable adverse effect on economic activity.", "title": "" }, { "docid": "7437f0c8549cb8f73f352f8043a80d19", "text": "Graphene is considered as one of the leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain. In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.", "title": "" }, { "docid": "efc7adc3963e7ccb0e2f1297a81005b2", "text": "Figure 5: Topic coverage of LAK data graph for the individual resources. 5. RELATED WORK Cobo et al.[3] presents an analysis of student participation in online discussion forums using an agglomerative hierarchical clustering algorithm, and explore the profiles to find relevant activity patterns and detect different student profiles. Barber et al. [1] uses a predictive analytic model to prevent students from failing in courses. They analyze several variables, such as grades, age, attendance and others, that can impede the student learning. Kahn et al. [7] present a long-term study using hierarchical cluster analysis, t-tests and Pearson correlation that identified seven behavior patterns of learners in online discussion forums based on their access.
García-Solórzano et al. [6] introduce a new educational monitoring tool that helps tutors to monitor the development of the students. Unlike traditional monitoring systems, they propose a faceted browser visualization tool to facilitate the analysis of the student progress. Glass [8] provides a versatile visualization tool to enable the creation of additional visualizations of data collections. Essa et al. [4] utilize predictive models to identify learners academically at-risk. They present the problem with an interesting analogy to the patient-doctor workflow, where first they identify the problem, analyze the situation and then prescribe courses that are indicated to help the student to succeed. Siadaty et al.[13] present the Learn-B environment, a hub system that captures information about the users usage in different softwares and learning activities in their workplace and present to the user feedback to support future decisions, planning and accompanies them in the learning process. In the same way, McAuley et al. [9] propose a visual analytics to support organizational learning in online communities. They present their analysis through an adjacency matrix and an adjustable timeline that show the communication-actions of the users and is able to organize it into temporal patterns. Bramucci et al. [2] presents Sherpa an academic recommendation system to support students on making decisions. For instance, using the learner profiles they recommend courses or make interventions in case that students are at-risk. In the related work, we showed how different perspectives and the necessity of new tools and methods to make data available and help decision-makers. 6. CONCLUSION In this paper we presented the main features of the Cite4Me Web application. Cite4Me makes use of several data sources to provide information for users interested on scientific publications and its applications. Additionally, we provided a general framework on data discovery and correlated resources based on a constructed feature set, consisting of items extracted from reference datasets. It made possible for users, to search and relate resources from a dataset with other resources offered as Linked Data. For more information about the Cite4Me Web application refer to http://www.cite4me.com. 7. REFERENCES [1] R. Barber and M. Sharkey. Course correction: using analytics to predict course success. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 259–262, New York, NY, USA, 2012. ACM. [2] R. Bramucci and J. Gaston. Sherpa: increasing student success with a recommendation engine. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 82–83, New York, NY, USA, 2012. ACM. [3] G. Cobo, D. García-Solórzano, J. A. Morán, E. Santamaría, C. Monzo, and J. Melenchón. Using agglomerative hierarchical clustering to model learner participation profiles in online discussion forums. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 248–251, New York, NY, USA, 2012. ACM. [4] A. Essa and H. Ayad. Student success system: risk analytics and data visualization using ensembles of predictive models. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 158–161, New York, NY, USA, 2012. ACM. [5] E. Gabrilovich and S. Markovitch. Computing semantic relatedness using wikipedia-based explicit semantic analysis. In Proc. 
of the 20th international joint conference on Artifical intelligence, IJCAI’07, pages 1606–1611, San Francisco, CA, USA, 2007. Morgan Kaufmann Pub. Inc. [6] D. García-Solórzano, G. Cobo, E. Santamaría, J. A. Morán, C. Monzo, and J. Melenchón. Educational monitoring tool based on faceted browsing and data portraits. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 170–178, New York, NY, USA, 2012. ACM. [7] T. M. Khan, F. Clear, and S. S. Sajadi. The relationship between educational performance and online access routines: analysis of students’ access to an online discussion forum. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 226–229, New York, NY, USA, 2012. ACM. [8] D. Leony, A. Pardo, L. de la Fuente Valentín, D. S. de Castro, and C. D. Kloos. Glass: a learning analytics visualization tool. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 162–163, New York, NY, USA, 2012. ACM. [9] J. McAuley, A. O’Connor, and D. Lewis. Exploring reflection in online communities. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 102–110, New York, NY, USA, 2012. ACM. [10] P. N. Mendes, M. Jakob, A. García-Silva, and C. Bizer. Dbpedia spotlight: shedding light on the web of documents. In Proc. of the 7th International Conference on Semantic Systems, I-Semantics ’11, pages 1–8, New York, NY, USA, 2011. ACM. [11] B. Pereira Nunes, S. Dietze, M. A. Casanova, R. Kawase, B. Fetahu, and W. Nejdl. Combining a co-occurrence-based and a semantic measure for entity linking. In ESWC, 2013 (to appear). [12] B. Pereira Nunes, R. Kawase, S. Dietze, D. Taibi, M. A. Casanova, and W. Nejdl. Can entities be friends? In G. Rizzo, P. Mendes, E. Charton, S. Hellmann, and A. Kalyanpur, editors, Proc. of the Web of Linked Entities Workshop in conjuction with the 11th International Semantic Web Conference, volume 906 of CEUR-WS.org, pages 45–57, Nov. 2012. [13] M. Siadaty, D. Gašević, J. Jovanović, N. Milikić, Z. Jeremić, L. Ali, A. Giljanović, and M. Hatala. Learn-b: a social analytics-enabled tool for self-regulated workplace learning. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 115–119, New York, NY, USA, 2012. ACM. [14] C. van Rijsbergen, S. Robertson, and M. Porter. New models in probabilistic information retrieval. 1980.", "title": "" }, { "docid": "cf26c4f612a23ec26b284a6b243de7f4", "text": "Grit-perseverance and passion for long-term goals-has been shown to be a significant predictor of academic success, even after controlling for other personality factors. Here, for the first time, we use a U.K.-representative sample and a genetically sensitive design to unpack the etiology of Grit and its prediction of academic achievement in comparison to well-established personality traits. For 4,642 16-year-olds (2,321 twin pairs), we used the Grit-S scale (perseverance of effort and consistency of interest), along with the Big Five personality traits, to predict grades on the General Certificate of Secondary Education (GCSE) exams, which are administered U.K.-wide at the end of compulsory education. Twin analyses of Grit perseverance yielded a heritability estimate of 37% (20% for consistency of interest) and no evidence for shared environmental influence. Personality, primarily conscientiousness, predicts about 6% of the variance in GCSE grades, but Grit adds little to this prediction. 
Moreover, multivariate twin analyses showed that roughly two-thirds of the GCSE prediction is mediated genetically. Grit perseverance of effort and Big Five conscientiousness are to a large extent the same trait both phenotypically (r = 0.53) and genetically (genetic correlation = 0.86). We conclude that the etiology of Grit is highly similar to other personality traits, not only in showing substantial genetic influence but also in showing no influence of shared environmental factors. Personality significantly predicts academic achievement, but Grit adds little phenotypically or genetically to the prediction of academic achievement beyond traditional personality factors, especially conscientiousness. (PsycINFO Database Record", "title": "" }, { "docid": "997993e389cdb1e40714e20b96927890", "text": "Developer support forums are becoming more popular than ever. Crowdsourced knowledge is an essential resource for many developers yet it can raise concerns about the quality of the shared content. Most existing research efforts address the quality of answers posted by Q&A community members. In this paper, we explore the quality of questions and propose a method of predicting the score of questions on Stack Overflow based on sixteen factors related to questions' format, content and interactions that occur in the post. We performed an extensive investigation to understand the relationship between the factors and the scores of questions. The multiple regression analysis shows that the question's length of the code, accepted answer score, number of tags and the count of views, comments and answers are statistically significantly associated with the scores of questions. Our findings can offer insights to community-based Q&A sites for improving the content of the shared knowledge.", "title": "" }, { "docid": "80947cea68851bc522d5ebf8a74e28ab", "text": "Advertising is key to the business model of many online services. Personalization aims to make ads more relevant for users and more effective for advertisers. However, relatively few studies into user attitudes towards personalized ads are available. We present a San Francisco Bay Area survey (N=296) and in-depth interviews (N=24) with teens and adults. People are divided and often either (strongly) agreed or disagreed about utility or invasiveness of personalized ads and associated data collection. Mobile ads were reported to be less relevant than those on desktop. Participants explained ad personalization based on their personal previous behaviors and guesses about demographic targeting. We describe both metrics improvements as well as opportunities for improving online advertising by focusing on positive ad interactions reported by our participants, such as personalization focused not just on product categories but specific brands and styles, awareness of life events, and situations in which ads were useful or even inspirational.", "title": "" }, { "docid": "1aaacf3d7d6311a118581d836f78d142", "text": "One of the most powerful features of SQL is the use of nested queries. Most research work on the optimization of nested queries focuses on aggregate subqueries. However, the solutions proposed for non-aggregate subqueries are still limited, especially for queries having multiple subqueries and null values. In this paper, we show that existing approaches to queries containing non-aggregate subqueries proposed in the literature (including rewrites) are not adequate. 
We then propose a new efficient approach, the nested relational approach, based on the nested relational algebra. Our approach directly unnests non-aggregate subqueries using hash joins, and treats all subqueries in a uniform manner, being able to deal with nested queries of any type and any level. We report on experimental work that confirms that existing approaches have difficulties dealing with non-aggregate subqueries, and that our approach offers better performance. We also discuss some possibilities for algebraic optimization and the issue of integrating our approach in a relational database system.", "title": "" }, { "docid": "c863d82ae2b56202d333ffa5bef5dd59", "text": "We present an algorithm for finding landmarks along a manifold. These landmarks provide a small set of locations spaced out along the manifold such that they capture the low-dimensional nonlinear structure of the data embedded in the high-dimensional space. The approach does not select points directly from the dataset, but instead we optimize each landmark by moving along the continuous manifold space (as approximated by the data) according to the gradient of an objective function. We borrow ideas from active learning with Gaussian processes to define the objective, which has the property that a new landmark is “repelled” by those currently selected, allowing for exploration of the manifold. We derive a stochastic algorithm for learning with large datasets and show results on several datasets, including the Million Song Dataset and articles from the New York Times.", "title": "" }, { "docid": "288377464cc80eef5c669e5821e3b2b3", "text": "For a long time, the human genome was considered an intrinsically stable entity; however, it is currently known that our human genome contains many unstable elements consisting of tandem repeat elements, mainly Short tandem repeats (STR), also known as microsatellites or Simple sequence repeats (SSR) (Ellegren, 2000). These sequences involve a repetitive unit of 1-6 bp, forming series with lengths from two to several thousand nucleotides. STR are widely found in proand eukaryotes, including humans. They appear scattered more or less evenly throughout the human genome, accounting for ca. 3% of the entire genome (Sharma et al., 2007). STR are polymorphic but stable in general population; however, repeats can become unstable during DNA replication, resulting in mitotic or meiotic contractions or expansions. STR instability is an important and unique form of mutation that is linked to >40 neurological, neurodegenerative, and neuromuscular disorders (Pearson et al., 2005). In particular, abnormal expansion of trinucleotide repeats (CTG)n, (CGG)n, (CCG)n, (GAA)n, and (CAG)n have been associated with different diseases such as fragile X syndrome, Huntington disease (HD), Dentatorubral-pallidoluysian atrophy (DRPLA), Friedreich ataxia (FA), diverse Spinocerebellar ataxias (SCA), and Myotonic dystrophy type 1 (DM1).", "title": "" }, { "docid": "90b913e3857625f3237ff7a47f675fbb", "text": "A new approach for the design of UWB hairpin-comb filters is presented. The filters can be designed to possess broad upper stopband characteristics by controlling the overall size of their resonators. 
The measured frequency characteristics of implemented UWB filters show potential first spurious passbands centered at about six times the fundamental passband center frequencies.", "title": "" }, { "docid": "f9c37f460fc0a4e7af577ab2cbe7045b", "text": "Declines in various cognitive abilities, particularly executive control functions, are observed in older adults. An important goal of cognitive training is to slow or reverse these age-related declines. However, opinion is divided in the literature regarding whether cognitive training can engender transfer to a variety of cognitive skills in older adults. In the current study, the authors trained older adults in a real-time strategy video game for 23.5 hr in an effort to improve their executive functions. A battery of cognitive tasks, including tasks of executive control and visuospatial skills, were assessed before, during, and after video-game training. The trainees improved significantly in the measures of game performance. They also improved significantly more than the control participants in executive control functions, such as task switching, working memory, visual short-term memory, and reasoning. Individual differences in changes in game performance were correlated with improvements in task switching. The study has implications for the enhancement of executive control processes of older adults.", "title": "" }, { "docid": "bac5b36d7da7199c1bb4815fa0d5f7de", "text": "During quadrupedal trotting, diagonal pairs of limbs are set down in unison and exert forces on the ground simultaneously. Ground-reaction forces on individual limbs of trotting dogs were measured separately using a series of four force platforms. Vertical and fore-aft impulses were determined for each limb from the force/time recordings. When mean fore-aft acceleration of the body was zero in a given trotting step (steady state), the fraction of vertical impulse on the forelimb was equal to the fraction of body weight supported by the forelimbs during standing (approximately 60 %). When dogs accelerated or decelerated during a trotting step, the vertical impulse was redistributed to the hindlimb or forelimb, respectively. This redistribution of the vertical impulse is due to a moment exerted about the pitch axis of the body by fore-aft accelerating and decelerating forces. Vertical forces exerted by the forelimb and hindlimb resist this pitching moment, providing stability during fore-aft acceleration and deceleration.", "title": "" }, { "docid": "5eb1aa594c3c6210f029b5bbf6acc599", "text": "Intestinal nematodes affecting dogs, i.e. roundworms, hookworms and whipworms, have a relevant health-risk impact for animals and, for most of them, for human beings. Both dogs and humans are typically infected by ingesting infective stages, (i.e. larvated eggs or larvae) present in the environment. The existence of a high rate of soil and grass contamination with infective parasitic elements has been demonstrated worldwide in leisure, recreational, public and urban areas, i.e. parks, green areas, bicycle paths, city squares, playgrounds, sandpits, beaches. 
This review discusses the epidemiological and sanitary importance of faecal pollution with canine intestinal parasites in urban environments and the integrated approaches useful to minimize the risk of infection in different settings.", "title": "" }, { "docid": "b52f9f47b972e797f11029111f5200b3", "text": "Sentiment lexicons have been leveraged as a useful source of features for sentiment analysis models, leading to the state-of-the-art accuracies. On the other hand, most existing methods use sentiment lexicons without considering context, typically taking the count, sum of strength, or maximum sentiment scores over the whole input. We propose a context-sensitive lexicon-based method based on a simple weighted-sum model, using a recurrent neural network to learn the sentiments strength, intensification and negation of lexicon sentiments in composing the sentiment value of sentences. Results show that our model can not only learn such operation details, but also give significant improvements over state-of-the-art recurrent neural network baselines without lexical features, achieving the best results on a Twitter benchmark.", "title": "" }, { "docid": "472f59fd9017e3c03650619c4f0201f3", "text": "Software Defined Networking (SDN) introduces a new communication network management paradigm and has gained much attention from academia and industry. However, the centralized nature of SDN is a potential vulnerability to the system since attackers may launch denial of services (DoS) attacks against the controller. Existing solutions limit requests rate to the controller by dropping overflowed requests, but they also drop legitimate requests to the controller. To address this problem, we propose FlowRanger, a buffer prioritizing solution for controllers to handle routing requests based on their likelihood to be attacking requests, which derives the trust values of the requesting sources. Based on their trust values, FlowRanger classifies routing requests into multiple buffer queues with different priorities. Thus, attacking requests are served with a lower priority than regular requests. Our simulation results demonstrates that FlowRanger can significantly enhance the request serving rate of regular users under DoS attacks against the controller. To the best of our knowledge, our work is the first solution to battle against controller DoS attacks on the controller side.", "title": "" }, { "docid": "1967de1be0b095b4a59a5bb0fdc403c0", "text": "As the popularity of content sharing websites has increased, they have become targets for spam, phishing and the distribution of malware. On YouTube, the facility for users to post comments can be used by spam campaigns to direct unsuspecting users to malicious third-party websites. In this paper, we demonstrate how such campaigns can be tracked over time using network motif profiling, i.e. by tracking counts of indicative network motifs. By considering all motifs of up to five nodes, we identify discriminating motifs that reveal two distinctly different spam campaign strategies, and present an evaluation that tracks two corresponding active campaigns.", "title": "" }, { "docid": "8877d6753d6b7cd39ba36c074ca56b00", "text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. 
In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.", "title": "" }, { "docid": "6dd1df4e520f5858d48db9860efb63a7", "text": "This paper proposes single-phase direct pulsewidth modulation (PWM) buck-, boost-, and buck-boost-type ac-ac converters. The proposed converters are implemented with a series-connected freewheeling diode and MOSFET pair, which allows to minimize the switching and conduction losses of the semiconductor devices and resolves the reverse-recovery problem of body diode of MOSFET. The proposed converters are highly reliable because they can solve the shoot-through and dead-time problems of traditional ac-ac converters without voltage/current sensing module, lossy resistor-capacitor (RC) snubbers, or bulky coupled inductors. In addition, they can achieve high obtainable voltage gain and also produce output voltage waveforms of good quality because they do not use lossy snubbers. Unlike the recently developed switching cell (SC) ac-ac converters, the proposed ac-ac converters have no circulating current and do not require bulky coupled inductors; therefore, the total losses, current stresses, and magnetic volume are reduced and efficiency is improved. Detailed analysis and experimental results are provided to validate the novelty and merit of the proposed converters.", "title": "" }, { "docid": "becbcb6ca7ac87a3e43dbc65748b258a", "text": "We present Mean Box Pooling, a novel visual representation that pools over CNN representations of a large number, highly overlapping object proposals. We show that such representation together with nCCA, a successful multimodal embedding technique, achieves state-of-the-art performance on the Visual Madlibs task. Moreover, inspired by the nCCA’s objective function, we extend classical CNN+LSTM approach to train the network by directly maximizing the similarity between the internal representation of the deep learning architecture and candidate answers. Again, such approach achieves a significant improvement over the prior work that also uses CNN+LSTM approach on Visual Madlibs.", "title": "" } ]
scidocsrr
76a79f93307b188952b2fe5e0210b0fe
I want to answer; who has a question?: Yahoo! answers recommender system
[ { "docid": "e870f2fe9a26b241bdeca882b6186169", "text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd recommender systems handbook as the choice of reading, you can find here.", "title": "" } ]
[ { "docid": "e6ff5af0a9d6105a60771a2c447fab5e", "text": "Object detection and classification in 3D is a key task in Automated Driving (AD). LiDAR sensors are employed to provide the 3D point cloud reconstruction of the surrounding environment, while the task of 3D object bounding box detection in real time remains a strong algorithmic challenge. In this paper, we build on the success of the oneshot regression meta-architecture in the 2D perspective image space and extend it to generate oriented 3D object bounding boxes from LiDAR point cloud. Our main contribution is in extending the loss function of YOLO v2 to include the yaw angle, the 3D box center in Cartesian coordinates and the height of the box as a direct regression problem. This formulation enables real-time performance, which is essential for automated driving. Our results are showing promising figures on KITTI benchmark, achieving real-time performance (40 fps) on Titan X GPU.", "title": "" }, { "docid": "22b259233ffe842e91347792bd7b48e0", "text": "The increase of the complexity and advancement in ecological and environmental sciences encourages scientists across the world to collect data from multiple places, times, and thematic scales to verify their hypotheses. Accumulated over time, such data not only increases in amount, but also in the diversity of the data sources spread around the world. This poses a huge challenge for scientists who have to manually search for information. To alleviate such problems, ONEMercury has recently been implemented as part of the DataONE project to serve as a portal for accessing environmental and observational data across the globe. ONEMercury harvests metadata from the data hosted by multiple repositories and makes it searchable. However, harvested metadata records sometimes are poorly annotated or lacking meaningful keywords, which could affect effective retrieval. Here, we develop algorithms for automatic annotation of metadata. We transform the problem into a tag recommendation problem with a controlled tag library, and propose two variants of an algorithm for recommending tags. Our experiments on four datasets of environmental science metadata records not only show great promises on the performance of our method, but also shed light on the different natures of the datasets.", "title": "" }, { "docid": "c366303728d2a8ee47fe4cbfe67dec24", "text": "Terrestrial Gamma-ray Flashes (TGFs), discovered in 1994 by the Compton Gamma-Ray Observatory, are high-energy photon bursts originating in the Earth’s atmosphere in association with thunderstorms. In this paper, we demonstrate theoretically that, while TGFs pass through the atmosphere, the large quantities of energetic electrons knocked out by collisions between photons and air molecules generate excited species of neutral and ionized molecules, leading to a significant amount of optical emissions. These emissions represent a novel type of transient luminous events in the vicinity of the cloud tops. We show that this predicted phenomenon illuminates a region with a size notably larger than the TGF source and has detectable levels of brightness. 
Since the spectroscopic, morphological, and temporal features of this luminous event are closely related with TGFs, corresponding measurements would provide a novel perspective for investigation of TGFs, as well as lightning discharges that produce them.", "title": "" }, { "docid": "5afe9c613da51904d498b282fb1b62df", "text": "Two types of suspended stripline ultra-wideband bandpass filters are described, one based on a standard lumped element (L-C) filter concept including transmission zeroes to improve the upper passband slope, and a second one consisting of the combination of a low-pass and a high-pass filter.", "title": "" }, { "docid": "e447a0129f01a096f03b16c2ee16c888", "text": "Many authors use feedforward neural networks for modeling and forecasting time series. Most of these applications are mainly experimental, and it is often difficult to extract a general methodology from the published studies. In particular, the choice of architecture is a tricky problem. We try to combine the statistical techniques of linear and nonlinear time series with the connectionist approach. The asymptotical properties of the estimators lead us to propose a systematic methodology to determine which weights are nonsignificant and to eliminate them to simplify the architecture. This method (SSM or statistical stepwise method) is compared to other pruning techniques and is applied to some artificial series, to the famous Sunspots benchmark, and to daily electrical consumption data.", "title": "" }, { "docid": "884ee23f40ad31f7010f9486b74d9433", "text": "A streamlined parallel traffic management system (PtMS) is outlined that works alongside a redesigned intelligent transportation system in Qingdao, China. The PtMS's structure provides enhanced control and management support, with increased versatility for use in real-world scenarios.", "title": "" }, { "docid": "4dc8b11b9123c6a25dcf4765d77cb6ca", "text": "Accurate and reliable information about land use and land cover is essential for change detection and monitoring of the specified area. It is also useful in the updating the geographical information about the area. Over the past decade, a significant amount of research has been conducted concerning the application of different classifier and image fusion technique in this area. In this paper, introductions to the land use and land cover classification techniques are given and the results from a number of different techniques are compared. It has been found that, in general fusion technique perform better than either conventional classifier or supervised/unsupervised classification.", "title": "" }, { "docid": "83856fb0a5e53c958473fdf878b89b20", "text": "Due to the expensive nature of an industrial robot, not all universities are equipped with areal robots for students to operate. Learning robotics without accessing to an actual robotic system has proven to be difficult for undergraduate students. For instructors, it is also an obstacle to effectively teach fundamental robotic concepts. Virtual robot simulator has been explored by many researchers to create a virtual environment for teaching and learning. This paper presents structure of a course project which requires students to develop a virtual robot simulator. The simulator integrates concept of kinematics, inverse kinematics and controls. 
Results show that this approach assists and promotes better students' understanding of robotics.", "title": "" }, { "docid": "865cfae2da5ad3d1d10d21b1defdc448", "text": "During the last decade, novel immunotherapeutic strategies, in particular antibodies directed against immune checkpoint inhibitors, have revolutionized the treatment of different malignancies leading to an improved survival of patients. Identification of immune-related biomarkers for diagnosis, prognosis, monitoring of immune responses and selection of patients for specific cancer immunotherapies is urgently required and therefore areas of intensive research. Easily accessible samples in particular liquid biopsies (body fluids), such as blood, saliva or urine, are preferred for serial tumor biopsies. Although monitoring of immune and tumor responses prior, during and post immunotherapy has led to significant advances of patients' outcome, valid and stable prognostic biomarkers are still missing. This might be due to the limited capacity of the technologies employed, reproducibility of results as well as assay stability and validation of results. Therefore solid approaches to assess immune regulation and modulation as well as to follow up the nature of the tumor in liquid biopsies are urgently required to discover valuable and relevant biomarkers including sample preparation, timing of the collection and the type of liquid samples. This article summarizes our knowledge of the well-known liquid material in a new context as liquid biopsy and focuses on collection and assay requirements for the analysis and the technical developments that allow the implementation of different high-throughput assays to detect alterations at the genetic and immunologic level, which could be used for monitoring treatment efficiency, acquired therapy resistance mechanisms and the prognostic value of the liquid biopsies.", "title": "" }, { "docid": "75f895ff76e7a55d589ff30637524756", "text": "This paper details the coreference resolution system submitted by Stanford at the CoNLL2011 shared task. Our system is a collection of deterministic coreference resolution models that incorporate lexical, syntactic, semantic, and discourse information. All these models use global document-level information by sharing mention attributes, such as gender and number, across mentions in the same cluster. We participated in both the open and closed tracks and submitted results using both predicted and gold mentions. Our system was ranked first in both tracks, with a score of 57.8 in the closed track and 58.3 in the open track.", "title": "" }, { "docid": "866c1e87076da5a94b9adeacb9091ea3", "text": "Training a support vector machine (SVM) is usually done by mapping the underlying optimization problem into a quadratic programming (QP) problem. Unfortunately, high quality QP solvers are not readily available, which makes research into the area of SVMs difficult for those without a QP solver. Recently, the Sequential Minimal Optimization algorithm (SMO) was introduced [1, 2]. SMO reduces SVM training down to a series of smaller QP subproblems that have an analytical solution and, therefore, does not require a general QP solver. SMO has been shown to be very efficient for classification problems using linear SVMs and/or sparse data sets.
This work shows how SMO can be genera lized to handle regression problems.", "title": "" }, { "docid": "0c0b099a2a4a404632a1f065cfa328c4", "text": "Quantum computers are available to use over the cloud, but the recent explosion of quantum software platforms can be overwhelming for those deciding on which to use. In this paper, we provide a current picture of the rapidly evolving quantum computing landscape by comparing four software platforms—Forest (pyQuil), QISKit, ProjectQ, and the Quantum Developer Kit—that enable researchers to use real and simulated quantum devices. Our analysis covers requirements and installation, language syntax through example programs, library support, and quantum simulator capabilities for each platform. For platforms that have quantum computer support, we compare hardware, quantum assembly languages, and quantum compilers. We conclude by covering features of each and briefly mentioning other quantum computing software packages.", "title": "" }, { "docid": "d7ea5e0bdf811f427b7c283d4aae7371", "text": "This work investigates the development of students’ computational thinking (CT) skills in the context of educational robotics (ER) learning activity. The study employs an appropriate CT model for operationalising and exploring students’ CT skills development in two different age groups (15 and 18 years old) and across gender. 164 students of different education levels (Junior high: 89; High vocational: 75) engaged in ER learning activities (2 hours per week, 11 weeks totally) and their CT skills were evaluated at different phases during the activity, using different modality (written and oral) assessment tools. The results suggest that: (a) students reach eventually the same level of CT skills development independent of their age and gender, (b) CT skills inmost cases need time to fully develop (students’ scores improve significantly towards the end of the activity), (c) age and gender relevant differences appear when analysing students’ score in the various specific dimensions of the CT skills model, (d) the modality of the skill assessment instrumentmay have an impact on students’ performance, (e) girls appear inmany situations to need more training time to reach the same skill level compared to boys. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f53a2ca0fda368d0e90cbb38076658af", "text": "RNAi therapeutics is a powerful tool for treating diseases by sequence-specific targeting of genes using siRNA. Since its discovery, the need for a safe and efficient delivery system for siRNA has increased. Here, we have developed and characterized a delivery platform for siRNA based on the natural polysaccharide starch in an attempt to address unresolved delivery challenges of RNAi. Modified potato starch (Q-starch) was successfully obtained by substitution with quaternary reagent, providing Q-starch with cationic properties. The results indicate that Q-starch was able to bind siRNA by self-assembly formation of complexes. For efficient and potent gene silencing we monitored the physical characteristics of the formed nanoparticles at increasing N/P molar ratios. The minimum ratio for complete entrapment of siRNA was 2. The resulting complexes, which were characterized by a small diameter (~30 nm) and positive surface charge, were able to protect siRNA from enzymatic degradation. 
Q-starch/siRNA complexes efficiently induced P-glycoprotein (P-gp) gene silencing in the human ovarian adenocarcinoma cell line, NCI-ADR/Res (NAR), over expressing the targeted gene and presenting low toxicity. Additionally, Q-starch-based complexes showed high cellular uptake during a 24-hour study, which also suggested that intracellular siRNA delivery barriers governed the kinetics of siRNA transfection. In this study, we have devised a promising siRNA delivery vector based on a starch derivative for efficient and safe RNAi application.", "title": "" }, { "docid": "7b5df73b6fb0574bd7c039da53047724", "text": "Many ad hoc network protocols and applications assume the knowledge of geographic location of nodes. The absolute position of each networked node is an assumed fact by most sensor networks which can then present the sensed information on a geographical map. Finding position without the aid of GPS in each node of an ad hoc network is important in cases where GPS is either not accessible, or not practical to use due to power, form factor or line of sight conditions. Position would also enable routing in sufficiently isotropic large networks, without the use of large routing tables. We are proposing APS – a localized, distributed, hop by hop positioning algorithm, that works as an extension of both distance vector routing and GPS positioning in order to provide approximate position for all nodes in a network where only a limited fraction of nodes have self positioning capability.", "title": "" }, { "docid": "e90c165a3e16035b56a4bb4ceb9282ed", "text": "Point of care testing (POCT) refers to laboratory testing that occurs near to the patient, often at the patient bedside. POCT can be advantageous in situations requiring rapid turnaround time of test results for clinical decision making. There are many challenges associated with POCT, mainly related to quality assurance. POCT is performed by clinical staff rather than laboratory trained individuals which can lead to errors resulting from a lack of understanding of the importance of quality control and quality assurance practices. POCT is usually more expensive than testing performed in the central laboratory and requires a significant amount of support from the laboratory to ensure the quality testing and meet accreditation requirements. Here, specific challenges related to POCT compliance with accreditation standards are discussed along with strategies that can be used to overcome these challenges. These areas include: documentation of POCT orders, charting of POCT results as well as training and certification of individuals performing POCT. Factors to consider when implementing connectivity between POCT instruments and the electronic medical record are also discussed in detail and include: uni-directional versus bidirectional communication, linking patient demographic information with POCT software, the importance of positive patient identification and considering where to chart POCT results in the electronic medical record.", "title": "" }, { "docid": "6cac6ab24b5e833e73c98db476e1437d", "text": "The observation that a particular drug state may acquire the properties of a discriminative stimulus is explicable on the basis of drug-induced interoceptive cues. 
The present investigation sought to determine (a) whether the hallucinogens mescaline and LSD could serve as discriminative stimuli when either drug is paired with saline and (b) whether discriminative responding would occur when the paired stimuli are produced by equivalent doses of LSD and mescaline. In a standard two-lever operant test chamber, rats received a reinforcer (sweetened milk) for correct responses according to a variable interval schedule. All sessions were preceded by one of two treatments; following treatment A, only responses on lever A were reinforced and, in a similar fashion, lever B was correct following treatment B. No responses were reinforced during the first five minutes of a daily thirty-minute session. It was found that mescaline and LSD can serve as discriminative stimuli when either drug is paired with saline and that the degree of discrimination varies with drug dose. When equivalent doses of the two drugs were given to the same animal, no discriminated responding was observed. The latter finding suggests that mescaline and LSD produce qualitatively similar interoceptive cues in the rat.", "title": "" }, { "docid": "4bce6150e9bc23716a19a0d7c02640c0", "text": "A Data Mining Framework for Constructing Features and Models for Intrusion Detection Systems", "title": "" }, { "docid": "9b3db8c2632ad79dc8e20435a81ef2a1", "text": "Social networks have changed the way information is delivered to the customers, shifting from traditional one-to-many to one-to-one communication. Opinion mining and sentiment analysis offer the possibility to understand the user-generated comments and explain how a certain product or a brand is perceived. Classification of different types of content is the first step towards understanding the conversation on the social media platforms. Our study analyses the content shared on Facebook in terms of topics, categories and shared sentiment for the domain of a sponsored Facebook brand page. Our results indicate that Product, Sales and Brand are the three most discussed topics, while Requests and Suggestions, Expressing Affect and Sharing are the most common intentions for participation. We discuss the implications of our findings for social media marketing and opinion mining.", "title": "" }, { "docid": "af1b98a3b40e8adc053ddafa49e44fd0", "text": "Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered as a natural generalization of linear principal component analysis. This gives rise to the question how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high dimensional feature space and need not have pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real world data. 1 peA and Feature Spaces Principal Component Analysis (PC A) (e.g. [3]) is an orthogonal basis transformation. The new basis is found by diagonalizing the centered covariance matrix of a data set {Xk E RNlk = 1, ... ,f}, defined by C = ((Xi (Xk))(Xi (Xk))T). The coordinates in the Eigenvector basis are called principal components. The size of an Eigenvalue >. corresponding to an Eigenvector v of C equals the amount of variance in the direction of v. 
Furthermore, the directions of the first n Eigenvectors corresponding to the biggest n Eigenvalues cover as much variance as possible by n orthogonal directions. In many applications they contain the most interesting information: for instance, in data compression, where we project onto the directions with biggest variance to retain as much information as possible, or in de-noising, where we deliberately drop directions with small variance. Clearly, one cannot assert that linear PCA will always detect all structure in a given data set. By the use of suitable nonlinear features, one can extract more information. Kernel PCA is very well suited to extract interesting nonlinear structures in the data [9]. The purpose of this work is therefore (i) to consider nonlinear de-noising based on Kernel PCA and (ii) to clarify the connection between feature space expansions and meaningful patterns in input space. Kernel PCA first maps the data into some feature space F via a (usually nonlinear) function <II and then performs linear PCA on the mapped data. As the feature space F might be very high dimensional (e.g. when mapping into the space of all possible d-th order monomials of input space), kernel PCA employs Mercer kernels instead of carrying Kernel peA and De-Noising in Feature Spaces 537 out the mapping <I> explicitly. A Mercer kernel is a function k(x, y) which for all data sets {Xi} gives rise to a positive matrix Kij = k(Xi' Xj) [6]. One can show that using k instead of a dot product in input space corresponds to mapping the data with some <I> to a feature space F [1], i.e. k(x,y) = (<I>(x) . <I>(y)). Kernels that have proven useful include Gaussian kernels k(x, y) = exp( -llx Yll2 Ie) and polynomial kernels k(x, y) = (x·y)d. Clearly, all algorithms that can be formulated in terms of dot products, e.g. Support Vector Machines [1], can be carried out in some feature space F without mapping the data explicitly. All these algorithms construct their solutions as expansions in the potentially infinite-dimensional feature space. The paper is organized as follows: in the next section, we briefly describe the kernel PCA algorithm. In section 3, we present an algorithm for finding approximate pre-images of expansions in feature space. Experimental results on toy and real world data are given in section 4, followed by a discussion of our findings (section 5). 2 Kernel peA and Reconstruction To perform PCA in feature space, we need to find Eigenvalues A > 0 and Eigenvectors V E F\\{O} satisfying AV = GV with G = (<I>(Xk)<I>(Xk)T).1 Substituting G into the Eigenvector equation, we note that all solutions V must lie in the span of <I>-images of the training data. This implies that we can consider the equivalent system A( <I>(Xk) . V) = (<I>(Xk) . GV) for all k = 1, ... ,f (1) and that there exist coefficients Q1 , ... ,Ql such that l V = L i=l Qi<l>(Xi) (2) Substituting C and (2) into (1), and defining an f x f matrix K by Kij := (<I>(Xi)· <I>(Xj)) = k( Xi, X j), we arrive at a problem which is cast in terms of dot products: solve", "title": "" } ]
scidocsrr
54739b925463523a5fa7e2294e6749a3
Ten years of a model of aesthetic appreciation and aesthetic judgments: The aesthetic episode - Developments and challenges in empirical aesthetics.
[ { "docid": "78c3573511176ba63e2cf727e09c7eb4", "text": "Human aesthetic preference in the visual domain is reviewed from definitional, methodological, empirical, and theoretical perspectives. Aesthetic science is distinguished from the perception of art and from philosophical treatments of aesthetics. The strengths and weaknesses of important behavioral techniques are presented and discussed, including two-alternative forced-choice, rank order, subjective rating, production/adjustment, indirect, and other tasks. Major findings are reviewed about preferences for colors (single colors, color combinations, and color harmony), spatial structure (low-level spatial properties, shape properties, and spatial composition within a frame), and individual differences in both color and spatial structure. Major theoretical accounts of aesthetic response are outlined and evaluated, including explanations in terms of mere exposure effects, arousal dynamics, categorical prototypes, ecological factors, perceptual and conceptual fluency, and the interaction of multiple components. The results of the review support the conclusion that aesthetic response can be studied rigorously and meaningfully within the framework of scientific psychology.", "title": "" } ]
[ { "docid": "396f6b6c09e88ca8e9e47022f1ae195b", "text": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.", "title": "" }, { "docid": "57fbb5bf0e7fe4b8be21fae87f027572", "text": "Android and iOS devices are leading the mobile device market. While various user experiences have been reported from the general user community about their differences, such as battery lifetime, display, and touchpad control, few in-depth reports can be found about their comparative performance when receiving the increasingly popular Internet streaming services. Today, video traffic starts to dominate the Internet mobile data traffic. In this work, focusing on Internet streaming accesses, we set to analyze and compare the performance when Android and iOS devices are accessing Internet streaming services. Starting from the analysis of a server-side workload collected from a top mobile streaming service provider, we find Android and iOS use different approaches to request media content, leading to different amounts of received traffic on Android and iOS devices when a same video clip is accessed. Further studies on the client side show that different data requesting approaches (standard HTTP request vs. HTTP range request) and different buffer management methods (static vs. dynamic) are used in Android and iOS mediaplayers, and their interplay has led to our observations. Our empirical results and analysis provide some insights for the current Android and iOS users, streaming service providers, and mobile mediaplayer developers.", "title": "" }, { "docid": "85f67ab0e1adad72bbe6417d67fd4c81", "text": "Data warehouses are used to store large amounts of data. This data is often used for On-Line Analytical Processing (OLAP). Short response times are essential for on-line decision support. Common approaches to reach this goal in read-mostly environments are the precomputation of materialized views and the use of index structures. In this paper, a framework is presented to evaluate different index structures analytically depending on nine parameters for the use in a data warehouse environment. The framework is applied to four different index structures to evaluate which structure works best for range queries. We show that all parameters influence the performance. 
Additionally, we show why bitmap index structures use modern disks better than traditional tree structures and why bitmaps will supplant the tree based index structures in the future.", "title": "" }, { "docid": "619c905f7ef5fa0314177b109e0ec0e6", "text": "The aim of this review is to systematically summarise qualitative evidence about work-based learning in health care organisations as experienced by nursing staff. Work-based learning is understood as informal learning that occurs inside the work community in the interaction between employees. Studies for this review were searched for in the CINAHL, PubMed, Scopus and ABI Inform ProQuest databases for the period 2000-2015. Nine original studies met the inclusion criteria. After the critical appraisal by two researchers, all nine studies were selected for the review. The findings of the original studies were aggregated, and four statements were prepared, to be utilised in clinical work and decision-making. The statements concerned the following issues: (1) the culture of the work community; (2) the physical structures, spaces and duties of the work unit; (3) management; and (4) interpersonal relations. Understanding the nurses' experiences of work-based learning and factors behind these experiences provides an opportunity to influence the challenges of learning in the demanding context of health care organisations.", "title": "" }, { "docid": "d135e72c317ea28a64a187b17541f773", "text": "Automatic face recognition (AFR) is an area with immense practical potential which includes a wide range of commercial and law enforcement applications, and it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in AFR continues to improve, benefiting from advances in a range of different fields including image processing, pattern recognition, computer graphics and physiology. However, systems based on visible spectrum images continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease their accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject.", "title": "" }, { "docid": "689f7aad97d36f71e43e843a331fcf5d", "text": "Dimension-reducing feature extraction neural network techniques which also preserve neighbourhood relationships in data have traditionally been the exclusive domain of Kohonen self organising maps. Recently, we introduced a novel dimension-reducing feature extraction process, which is also topographic, based upon a Radial Basis Function architecture. It has been observed that the generalisation performance of the system is broadly insensitive to model order complexity and other smoothing factors such as the kernel widths, contrary to intuition derived from supervised neural network models. In this paper we provide an effective demonstration of this property and give a theoretical justification for the apparent 'self-regularising' behaviour of the 'NEUROSCALE' architecture. 
1 'NeuroScale': A Feed-forward Neural Network Topographic Transformation Recently an important class of topographic neural network based feature extraction approaches, which can be related to the traditional statistical methods of Sammon Mappings (Sammon, 1969) and Multidimensional Scaling (Kruskal, 1964), have been introduced (Mao and Jain, 1995; Lowe, 1993; Webb, 1995; Lowe and Tipping, 1996). These novel alternatives to Kohonen-like approaches for topographic feature extraction possess several interesting properties. For instance, the NEuROSCALE architecture has the empirically observed property that the generalisation perfor544 D. Lowe and M. E. Tipping mance does not seem to depend critically on model order complexity, contrary to intuition based upon knowledge of its supervised counterparts. This paper presents evidence for their 'self-regularising' behaviour and provides an explanation in terms of the curvature of the trained models. We now provide a brief introduction to the NEUROSCALE philosophy of nonlinear topographic feature extraction. Further details may be found in (Lowe, 1993; Lowe and Tipping, 1996). We seek a dimension-reducing, topographic transformation of data for the purposes of visualisation and analysis. By 'topographic', we imply that the geometric structure of the data be optimally preserved in the transformation, and the embodiment of this constraint is that the inter-point distances in the feature space should correspond as closely as possible to those distances in the data space. The implementation of this principle by a neural network is very simple. A Radial Basis Function (RBF) neural network is utilised to predict the coordinates of the data point in the transformed feature space. The locations of the feature points are indirectly determined by adjusting the weights of the network. The transformation is determined by optimising the network parameters in order to minimise a suitable error measure that embodies the topographic principle. The specific details of this alternative approach are as follows. Given an mdimensional input space of N data points x q , an n-dimensional feature space of points Yq is generated such that the relative positions of the feature space points minimise the error, or 'STRESS', term: N E = 2: 2:(d~p dqp )2, (1) p q>p where the d~p are the inter-point Euclidean distances in the data space: d~p = J(xq Xp)T(Xq xp), and the dqp are the corresponding distances in the feature space: dqp = J(Yq Yp)T(Yq Yp)· The points yare generated by the RBF, given the data points as input. That is, Yq = f(xq;W), where f is the nonlinear transformation effected by the RBF with parameters (weights and any kernel smoothing factors) W. The distances in the feature space may thus be given by dqp =11 f(xq) f(xp) \" and so more explicitly by", "title": "" }, { "docid": "5e240ad1d257a90c0ca414ce8e7e0949", "text": "Improving Cloud Security using Secure Enclaves by Jethro Gideon Beekman Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences University of California, Berkeley Professor David Wagner, Chair Internet services can provide a wealth of functionality, yet their usage raises privacy, security and integrity concerns for users. This is caused by a lack of guarantees about what is happening on the server side. As a worst case scenario, the service might be subjected to an insider attack. This dissertation describes the unalterable secure service concept for trustworthy cloud computing. 
Secure services are a powerful abstraction that enables viewing the cloud as a true extension of local computing resources. Secure services combine the security benefits one gets locally with the manageability and availability of the distributed cloud. Secure services are implemented using secure enclaves. Remote attestation of the server is used to obtain guarantees about the programming of the service. This dissertation addresses concerns related to using secure enclaves such as providing data freshness and distributing identity information. Certificate Transparency is augmented to distribute information about which services exist and what they do. All combined, this creates a platform that allows legacy clients to obtain security guarantees about Internet services.", "title": "" }, { "docid": "4d040791f63af5e2ff13ff2b705dc376", "text": "The frequency and severity of forest fires, coupled with changes in spatial and temporal precipitation and temperature patterns, are likely to severely affect the characteristics of forest and permafrost patterns in boreal eco-regions. Forest fires, however, are also an ecological factor in how forest ecosystems form and function, as they affect the rate and characteristics of tree recruitment. A better understanding of fire regimes and forest recovery patterns in different environmental and climatic conditions will improve the management of sustainable forests by facilitating the process of forest resilience. Remote sensing has been identified as an effective tool for preventing and monitoring forest fires, as well as being a potential tool for understanding how forest ecosystems respond to them. However, a number of challenges remain before remote sensing practitioners will be able to better understand the effects of forest fires and how vegetation responds afterward. This article attempts to provide a comprehensive review of current research with respect to remotely sensed data and methods used to model post-fire effects and forest recovery patterns in boreal forest regions. The review reveals that remote sensing-based monitoring of post-fire effects and forest recovery patterns in boreal forest regions is not only limited by the gaps in both field data and remotely sensed data, but also the complexity of far-northern fire regimes, climatic conditions and environmental conditions. We expect that the integration of different remotely sensed data coupled with field campaigns can provide an important data source to support the monitoring of post-fire effects and forest recovery patterns. Additionally, the variation and stratification of preand post-fire vegetation and environmental conditions should be considered to achieve a reasonable, operational model for monitoring post-fire effects and forest patterns in boreal regions. OPEN ACCESS Remote Sens. 2014, 6 471", "title": "" }, { "docid": "807e008d5c7339706f8cfe71e9ced7ba", "text": "Current competitive challenges induced by globalization and advances in information technology have forced companies to focus on managing customer relationships, and in particular customer satisfaction, in order to efficiently maximize revenues. This paper reports exploratory research based on a mail survey addressed to the largest 1,000 Greek organizations. 
The objectives of the research were: to investigate the extent of the usage of customerand market-related knowledge management (KM) instruments and customer relationship management (CRM) systems by Greek organizations and their relationship with demographic and organizational variables; to investigate whether enterprises systematically carry out customer satisfaction and complaining behavior research; and to examine the impact of the type of the information system used and managers’ attitudes towards customer KM practices. In addition, a conceptual model of CRM development stages is proposed. The findings of the survey show that about half of the organizations of the sample do not adopt any CRM philosophy. The remaining organizations employ instruments to conduct customer satisfaction and other customer-related research. However, according to the proposed model, they are positioned in the first, the preliminary CRM development stage. The findings also suggest that managers hold positive attitudes towards CRM and that there is no significant relationship between the type of the transactional information system used and the extent to which customer satisfaction research is performed by the organizations. The paper concludes by discussing the survey findings and proposing future", "title": "" }, { "docid": "4ed74450320dfef4156013292c1d2cbb", "text": "This paper describes the decisions by which teh Association for Computing Machinery integrated good features from the Los Alamos e-print (physics) archive and from Cornell University's Networked Computer Science Technical Reference Library to form their own open, permanent, online “computing research repository” (CoRR). Submitted papers are not refereed and anyone can browse and extract CoRR material for free, so Corr's eventual success could revolutionize computer science publishing. But several serious challenges remain: some journals forbid online preprints, teh CoRR user interface is cumbersome, submissions are only self-indexed, (no professional library staff manages teh archive) and long-term funding is uncertain.", "title": "" }, { "docid": "0105070bd23400083850627b1603af0b", "text": "This research covers an endeavor by the author on the usage of automated vision and navigation framework; the research is conducted by utilizing a Kinect sensor requiring minimal effort framework for exploration purposes in the zone of robot route. For this framework, GMapping (a highly efficient Rao-Blackwellized particle filer to learn grid maps from laser range data) parameters have been optimized to improve the accuracy of the map generation and the laser scan. With the use of Robot Operating System (ROS), the open source GMapping bundle was utilized as a premise for a map era and Simultaneous Localization and Mapping (SLAM). Out of the many different map generation techniques, the tele-operation used is interactive marker, which controls the TurtleBot 2 movements via RVIZ (3D visualization tool for ROS). Test results completed with the multipurpose robot in a counterfeit and regular environment represents the preferences of the proposed strategy. From experiments, it is found that Kinect sensor produces a more accurate map compared to non-filtered laser range finder data, which is excellent since the price of a Kinect sensor is much cheaper than a laser range finder. 
An expansion of experimental results was likewise done to test the performance of the portable robot frontier exploring in an obscure environment while performing SLAM alongside the proposed technique.", "title": "" }, { "docid": "e3299737a0fb3cd3c9433f462565b278", "text": "BACKGROUND\nMore than two-thirds of pregnant women experience low-back pain and almost one-fifth experience pelvic pain. The two conditions may occur separately or together (low-back and pelvic pain) and typically increase with advancing pregnancy, interfering with work, daily activities and sleep.\n\n\nOBJECTIVES\nTo update the evidence assessing the effects of any intervention used to prevent and treat low-back pain, pelvic pain or both during pregnancy.\n\n\nSEARCH METHODS\nWe searched the Cochrane Pregnancy and Childbirth (to 19 January 2015), and the Cochrane Back Review Groups' (to 19 January 2015) Trials Registers, identified relevant studies and reviews and checked their reference lists.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of any treatment, or combination of treatments, to prevent or reduce the incidence or severity of low-back pain, pelvic pain or both, related functional disability, sick leave and adverse effects during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy.\n\n\nMAIN RESULTS\nWe included 34 RCTs examining 5121 pregnant women, aged 16 to 45 years and, when reported, from 12 to 38 weeks' gestation. Fifteen RCTs examined women with low-back pain (participants = 1847); six examined pelvic pain (participants = 889); and 13 examined women with both low-back and pelvic pain (participants = 2385). Two studies also investigated low-back pain prevention and four, low-back and pelvic pain prevention. Diagnoses ranged from self-reported symptoms to clinicians' interpretation of specific tests. All interventions were added to usual prenatal care and, unless noted, were compared with usual prenatal care. The quality of the evidence ranged from moderate to low, raising concerns about the confidence we could put in the estimates of effect. For low-back painResults from meta-analyses provided low-quality evidence (study design limitations, inconsistency) that any land-based exercise significantly reduced pain (standardised mean difference (SMD) -0.64; 95% confidence interval (CI) -1.03 to -0.25; participants = 645; studies = seven) and functional disability (SMD -0.56; 95% CI -0.89 to -0.23; participants = 146; studies = two). Low-quality evidence (study design limitations, imprecision) also suggested no significant differences in the number of women reporting low-back pain between group exercise, added to information about managing pain, versus usual prenatal care (risk ratio (RR) 0.97; 95% CI 0.80 to 1.17; participants = 374; studies = two). For pelvic painResults from a meta-analysis provided low-quality evidence (study design limitations, imprecision) of no significant difference in the number of women reporting pelvic pain between group exercise, added to information about managing pain, and usual prenatal care (RR 0.97; 95% CI 0.77 to 1.23; participants = 374; studies = two). 
For low-back and pelvic painResults from meta-analyses provided moderate-quality evidence (study design limitations) that: an eight- to 12-week exercise program reduced the number of women who reported low-back and pelvic pain (RR 0.66; 95% CI 0.45 to 0.97; participants = 1176; studies = four); land-based exercise, in a variety of formats, significantly reduced low-back and pelvic pain-related sick leave (RR 0.76; 95% CI 0.62 to 0.94; participants = 1062; studies = two).The results from a number of individual studies, incorporating various other interventions, could not be pooled due to clinical heterogeneity. There was moderate-quality evidence (study design limitations or imprecision) from individual studies suggesting that osteomanipulative therapy significantly reduced low-back pain and functional disability, and acupuncture or craniosacral therapy improved pelvic pain more than usual prenatal care. Evidence from individual studies was largely of low quality (study design limitations, imprecision), and suggested that pain and functional disability, but not sick leave, were significantly reduced following a multi-modal intervention (manual therapy, exercise and education) for low-back and pelvic pain.When reported, adverse effects were minor and transient.\n\n\nAUTHORS' CONCLUSIONS\nThere is low-quality evidence that exercise (any exercise on land or in water), may reduce pregnancy-related low-back pain and moderate- to low-quality evidence suggesting that any exercise improves functional disability and reduces sick leave more than usual prenatal care. Evidence from single studies suggests that acupuncture or craniosacral therapy improves pregnancy-related pelvic pain, and osteomanipulative therapy or a multi-modal intervention (manual therapy, exercise and education) may also be of benefit.Clinical heterogeneity precluded pooling of results in many cases. Statistical heterogeneity was substantial in all but three meta-analyses, which did not improve following sensitivity analyses. Publication bias and selective reporting cannot be ruled out.Further evidence is very likely to have an important impact on our confidence in the estimates of effect and change the estimates. Studies would benefit from the introduction of an agreed classification system that can be used to categorise women according to their presenting symptoms, so that treatment can be tailored accordingly.", "title": "" }, { "docid": "c87cc578b4a74bae4ea1e0d0d68a6038", "text": "Human-Computer Interaction (HCI) exists ubiquitously in our daily lives. It is usually achieved by using a physical controller such as a mouse, keyboard or touch screen. It hinders Natural User Interface (NUI) as there is a strong barrier between the user and computer. There are various hand tracking systems available on the market, but they are complex and expensive. In this paper, we present the design and development of a robust marker-less hand/finger tracking and gesture recognition system using low-cost hardware. We propose a simple but efficient method that allows robust and fast hand tracking despite complex background and motion blur. Our system is able to translate the detected hands or gestures into different functional inputs and interfaces with other applications via several methods. It enables intuitive HCI and interactive motion gaming. We also developed sample applications that can utilize the inputs from the hand tracking system. 
Our results show that an intuitive HCI and motion gaming system can be achieved with minimum hardware requirements.", "title": "" }, { "docid": "505aff71acf5469dc718b8168de3e311", "text": "We propose two suffix array inspired full-text indexes. One, called SAhash, augments the suffix array with a hash table to speed up pattern searches due to significantly narrowed search interval before the binary search phase. The other, called FBCSA, is a compact data structure, similar to Mäkinen’s compact suffix array, but working on fixed sized blocks. Experiments on the Pizza & Chili 200MB datasets show that SA-hash is about 2–3 times faster in pattern searches (counts) than the standard suffix array, for the price of requiring 0.2n− 1.1n bytes of extra space, where n is the text length, and setting a minimum pattern length. FBCSA is relatively fast in single cell accesses (a few times faster than related indexes at about the same or better compression), but not competitive if many consecutive cells are to be extracted. Still, for the task of extracting, e.g., 10 successive cells its time-space relation remains attractive.", "title": "" }, { "docid": "efd2843175ad0b860ad1607f337addc5", "text": "We demonstrate the usefulness of the uniform resource locator (URL) alone in performing web page classification. This approach is faster than typical web page classification, as the pages do not have to be fetched and analyzed. Our approach segments the URL into meaningful chunks and adds component, sequential and orthographic features to model salient patterns. The resulting features are used in supervised maximum entropy modeling. We analyze our approach's effectiveness on two standardized domains. Our results show that in certain scenarios, URL-based methods approach the performance of current state-of-the-art full-text and link-based methods.", "title": "" }, { "docid": "ab15d55e8308843c526aed0c32db1cb2", "text": "ix Chapter 1: Introduction 1 1.1 Knowledge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2 Human-Robot Communication . . . . . . . . . . . . . . . . . . . . . . . 5 1.3 Life-Long Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.5 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Chapter 2: Background and Related Work 11 2.1 Manual Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.2 Task-Level Robot Control . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.3 Learning from Demonstration . . . . . . . . . . . . . . . . . . . . . . . . 13 2.3.1 Demonstration Approaches . . . . . . . . . . . . . . . . . . . . . 14 2.3.2 Policy Generation . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.4 Life-Long Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Chapter 3: Learning from Demonstration 19 3.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.2 Role of the Instructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.3 Role of the Student . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.4 Knowledge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.4.1 Human-Robot Communication . . . . . . . . . . . . . . . . . . . 24 3.4.2 System Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . 
28 3.5 Learning a Task Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30", "title": "" }, { "docid": "e5eb79b313dad91de1144cd0098cde15", "text": "Information Extraction aims to retrieve certain types of information from natural language text by processing them automatically. For example, an information extraction system might retrieve information about geopolitical indicators of countries from a set of web pages while ignoring other types of information. Ontology-based information extraction has recently emerged as a subfield of information extraction. Here, ontologies which provide formal and explicit specifications of conceptualizations play a crucial role in the information extraction process. Because of the use of ontologies, this field is related to knowledge representation and has the potential to assist the development of the Semantic Web. In this paper, we provide an introduction to ontology-based information extraction and review the details of different ontology-based information extraction systems developed so far. We attempt to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding on their operation. We also discuss the implementation details of these systems including the tools used by them and the metrics used to measure their performance. In addition, we attempt to identify the possible future directions for this field.", "title": "" }, { "docid": "f18833c40f6b15bb588eec3bbe52cdd4", "text": "Presented here is a cladistic analysis of the South American and some North American Camelidae. This analysis shows that Camelini and Lamini are monophyletic groups, as are the genera Palaeolama and Vicugna, while Hemiauchenia and Lama are paraphyletic. Some aspects of the migration and distribution of South American camelids are also discussed, confirming in part the propositions of other authors. According to the cladistic analysis and previous propositions, it is possible to infer that two Camelidae migration events occurred in America. In the first one, Hemiauchenia arrived in South America and, this was related to the speciation processes that originated Lama and Vicugna. In the second event, Palaeolama migrated from North America to the northern portion of South America. It is evident that there is a need for larger studies about fossil Camelidae, mainly regarding older ages and from the South American austral region. This is important to better undertand the geographic and temporal distribution of Camelidae and, thus, the biogeographic aspects after the Great American Biotic Interchange.", "title": "" }, { "docid": "de061c5692bf11876c03b9b5e7c944a0", "text": "The purpose of this article is to summarize several change theories and assumptions about the nature of change. The author shows how successful change can be encouraged and facilitated for long-term success. The article compares the characteristics of Lewin’s Three-Step Change Theory, Lippitt’s Phases of Change Theory, Prochaska and DiClemente’s Change Theory, Social Cognitive Theory, and the Theory of Reasoned Action and Planned Behavior to one another. Leading industry experts will need to continually review and provide new information relative to the change process and to our evolving society and culture. here are many change theories and some of the most widely recognized are briefly summarized in this article. The theories serve as a testimony to the fact that change is a real phenomenon. 
It can be observed and analyzed through various steps or phases. The theories have been conceptualized to answer the question, “How does successful change happen?” Lewin’s Three-Step Change Theory Kurt Lewin (1951) introduced the three-step change model. This social scientist views behavior as a dynamic balance of forces working in opposing directions. Driving forces facilitate change because they push employees in the desired direction. Restraining forces hinder change because they push employees in the opposite direction. Therefore, these forces must be analyzed and Lewin’s three-step model can help shift the balance in the direction of the planned change (http://www.csupomona.edu/~jvgrizzell/best_practices/bctheory.html). T INTERNATIONAL JOURNAL OF MNAGEMENT, BUSINESS, AND ADMINISTRATION 2_____________________________________________________________________________________ According to Lewin, the first step in the process of changing behavior is to unfreeze the existing situation or status quo. The status quo is considered the equilibrium state. Unfreezing is necessary to overcome the strains of individual resistance and group conformity. Unfreezing can be achieved by the use of three methods. First, increase the driving forces that direct behavior away from the existing situation or status quo. Second, decrease the restraining forces that negatively affect the movement from the existing equilibrium. Third, find a combination of the two methods listed above. Some activities that can assist in the unfreezing step include: motivate participants by preparing them for change, build trust and recognition for the need to change, and actively participate in recognizing problems and brainstorming solutions within a group (Robbins 564-65). Lewin’s second step in the process of changing behavior is movement. In this step, it is necessary to move the target system to a new level of equilibrium. Three actions that can assist in the movement step include: persuading employees to agree that the status quo is not beneficial to them and encouraging them to view the problem from a fresh perspective, work together on a quest for new, relevant information, and connect the views of the group to well-respected, powerful leaders that also support the change (http://www.csupomona.edu/~jvgrizzell/best_practices/bctheory.html). The third step of Lewin’s three-step change model is refreezing. This step needs to take place after the change has been implemented in order for it to be sustained or “stick” over time. It is high likely that the change will be short lived and the employees will revert to their old equilibrium (behaviors) if this step is not taken. It is the actual integration of the new values into the community values and traditions. The purpose of refreezing is to stabilize the new equilibrium resulting from the change by balancing both the driving and restraining forces. One action that can be used to implement Lewin’s third step is to reinforce new patterns and institutionalize them through formal and informal mechanisms including policies and procedures (Robbins 564-65). Therefore, Lewin’s model illustrates the effects of forces that either promote or inhibit change. Specifically, driving forces promote change while restraining forces oppose change. Hence, change will occur when the combined strength of one force is greater than the combined strength of the opposing set of forces (Robbins 564-65). 
Lippitt’s Phases of Change Theory Lippitt, Watson, and Westley (1958) extend Lewin’s Three-Step Change Theory. Lippitt, Watson, and Westley created a seven-step theory that focuses more on the role and responsibility of the change agent than on the evolution of the change itself. Information is continuously exchanged throughout the process. The seven steps are:", "title": "" }, { "docid": "bda04f2eaee74979d7684681041e19bd", "text": "In March of 2016, Google DeepMind's AlphaGo, a computer Go-playing program, defeated the reigning human world champion Go player, 4-1, a feat far more impressive than previous victories by computer programs in chess (IBM's Deep Blue) and Jeopardy (IBM's Watson). The main engine behind the program combines machine learning approaches with a technique called Monte Carlo tree search. Current versions of Monte Carlo tree search used in Go-playing algorithms are based on a version developed for games that traces its roots back to the adaptive multi-stage sampling simulation optimization algorithm for estimating value functions in finite-horizon Markov decision processes (MDPs) introduced by Chang et al. (2005), which was the first use of Upper Confidence Bounds (UCBs) for Monte Carlo simulation-based solution of MDPs. We review the main ideas in UCB-based Monte Carlo tree search by connecting it to simulation optimization through the use of two simple examples: decision trees and tic-tac-toe.", "title": "" } ]
scidocsrr
1a07755c5e5301f6e4313eb427481d39
GlyphLens: View-Dependent Occlusion Management in the Interactive Glyph Visualization
[ { "docid": "116b5f129e780a99a1d78ec02a1fb092", "text": "We present a family of three interactive Context-Aware Selection Techniques (CAST) for the analysis of large 3D particle datasets. For these datasets, spatial selection is an essential prerequisite to many other analysis tasks. Traditionally, such interactive target selection has been particularly challenging when the data subsets of interest were implicitly defined in the form of complicated structures of thousands of particles. Our new techniques SpaceCast, TraceCast, and PointCast improve usability and speed of spatial selection in point clouds through novel context-aware algorithms. They are able to infer a user's subtle selection intention from gestural input, can deal with complex situations such as partially occluded point clusters or multiple cluster layers, and can all be fine-tuned after the selection interaction has been completed. Together, they provide an effective and efficient tool set for the fast exploratory analysis of large datasets. In addition to presenting Cast, we report on a formal user study that compares our new techniques not only to each other but also to existing state-of-the-art selection methods. Our results show that Cast family members are virtually always faster than existing methods without tradeoffs in accuracy. In addition, qualitative feedback shows that PointCast and TraceCast were strongly favored by our participants for intuitiveness and efficiency.", "title": "" }, { "docid": "78b371e7df39a1ebbad64fdee7303573", "text": "This state of the art report focuses on glyph-based visualization, a common form of visual design where a data set is depicted by a collection of visual objects referred to as glyphs. Its major strength is that patterns of multivariate data involving more than two attribute dimensions can often be more readily perceived in the context of a spatial relationship, whereas many techniques for spatial data such as direct volume rendering find difficult to depict with multivariate or multi-field data, and many techniques for non-spatial data such as parallel coordinates are less able to convey spatial relationships encoded in the data. This report fills several major gaps in the literature, drawing the link between the fundamental concepts in semiotics and the broad spectrum of glyph-based visualization, reviewing existing design guidelines and implementation techniques, and surveying the use of glyph-based visualization in many applications.", "title": "" } ]
[ { "docid": "c668dd96bbb4247ad73b178a7ba1f921", "text": "Emotions play a key role in natural language understanding and sensemaking. Pure machine learning usually fails to recognize and interpret emotions in text accurately. The need for knowledge bases that give access to semantics and sentics (the conceptual and affective information) associated with natural language is growing exponentially in the context of big social data analysis. To this end, this paper proposes EmoSenticSpace, a new framework for affective common-sense reasoning that extends WordNet-Affect and SenticNet by providing both emotion labels and polarity scores for a large set of natural language concepts. The framework is built by means of fuzzy c-means clustering and supportvector-machine classification, and takes into account a number of similarity measures, including point-wise mutual information and emotional affinity. EmoSenticSpace was tested on three emotionrelated natural language processing tasks, namely sentiment analysis, emotion recognition, and personality detection. In all cases, the proposed framework outperforms the state-of-the-art. In particular, the direct evaluation of EmoSenticSpace against psychological features provided in the benchmark ISEAR dataset shows a 92.15% agreement. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "42b705c2d8e6acbfe207dd86911b2494", "text": "OBJECTIVES\nWe reported the interim findings of a randomized controlled trial (RCT) to examine the effects of a mind body physical exercise (Tai Chi) on cognitive function in Chinese subjects at risk of cognitive decline.\n\n\nSUBJECTS\n389 Chinese older persons with either a Clinical Dementia Rating (CDR 0.5) or amnestic-MCI participated in an exercise program. The exercise intervention lasted for 1 year; 171 subjects were trained with 24 forms simplified Tai Chi (Intervention, I) and 218 were trained with stretching and toning exercise (Control, C). The exercise comprised of advised exercise sessions of at least three times per week.\n\n\nRESULTS\nAt 5th months (2 months after completion of training), both I and C subjects showed an improvement in global cognitive function, delayed recall and subjective cognitive complaints (paired t-tests, p < 0.05). Improvements in visual spans and CDR sum of boxes scores were observed in I group (paired t-tests, p < 0.001). Three (2.2%) and 21(10.8%) subjects from the I and C groups progressed to dementia (Pearson chi square = 8.71, OR = 5.34, 95% CI 1.56-18.29). Logistic regression analysis controlled for baseline group differences in education and cognitive function suggested I group was associated with stable CDR (OR = 0.14, 95%CI = 0.03-0.71, p = 0.02).\n\n\nCONCLUSIONS\nOur interim findings showed that Chinese style mind body (Tai Chi) exercise may offer specific benefits to cognition, potential clinical interests should be further explored with longer observation period.", "title": "" }, { "docid": "363c1ecd086043311f16b53b20778d51", "text": "One recent development of cultural globalization emerges in the convergence of taste in media consumption within geo-cultural regions, such as Latin American telenovelas, South Asian Bollywood films and East Asian trendy dramas. Originating in Japan, the so-called trendy dramas (or idol dramas) have created a craze for Japanese commodities in its neighboring countries (Ko, 2004). 
Following this Japanese model, Korea has also developed as a stronghold of regional exports, ranging from TV programs, movies and pop music to food, fashion and tourism. The fondness for all things Japanese and Korean in East Asia has been vividly captured by such buzz phrases as Japan-mania (hari in Chinese) and the Korean wave (hallyu in Korean and hanliu in Chinese). These two phenomena underscore how popular culture helps polish the image of a nation and thus strengthens its economic competitiveness in the global market. Consequently, nationbranding has become incorporated into the project of nation-building in light of globalization. However, Japan’s cultural spread and Korea’s cultural expansion in East Asia are often analysed from angles that are polar opposites. Scholars suggest that Japan-mania is initiated by the ardent consumers of receiving countries (Nakano, 2002), while the Korea wave is facilitated by the Korean state in order to boost its culture industry (Ryoo, 2008). Such claims are legitimate but neglect the analogues of these two phenomena. This article examines the parallel paths through which Japan-mania and the Korean wave penetrate into people’s everyday practices in Taiwan – arguably one of the first countries to be swept by these two trends. My aim is to illuminate the processes in which nation-branding is not only promoted by a nation as an international marketing strategy, but also appropriated by a receiving country as a pattern of consumption. Three seemingly contradictory arguments explain why cultural products ‘sell’ across national borders: cultural transparency, cultural difference and hybridization. First, cultural exports targeting the global market are rarely culturally specific so that they allow worldwide audiences to ‘project [into them] indigenous values, beliefs, rites, and rituals’ Media, Culture & Society 33(1) 3 –18 © The Author(s) 2011 Reprints and permission: sagepub.co.uk/journalsPermissions.nav DOI: 10.1177/0163443710379670 mcs.sagepub.com", "title": "" }, { "docid": "e818b0a38d17a77cc6cfdee2761f12c4", "text": "In this paper, we present improved lane tracking using vehicle localization. Lane markers are detected using a bank of steerable filters, and lanes are tracked using Kalman filtering. On-road vehicle detection has been achieved using an active learning approach, and vehicles are tracked using a Condensation particle filter. While most state-of-the art lane tracking systems are not capable of performing in high-density traffic scenes, the proposed framework exploits robust vehicle tracking to allow for improved lane tracking in high density traffic. Experimental results demonstrate that lane tracking performance, robustness, and temporal response are significantly improved in the proposed framework, while also tracking vehicles, with minimal additional hardware requirements.", "title": "" }, { "docid": "1b450f4ccaf148dad9d97f4c4b1b78dd", "text": "Deep neural network models trained on large labeled datasets are the state-of-theart in a large variety of computer vision tasks. In many applications, however, labeled data is expensive to obtain or requires a time consuming manual annotation process. In contrast, unlabeled data is often abundant and available in large quantities. We present a principled framework to capitalize on unlabeled data by training deep generative models on both labeled and unlabeled data. 
We show that such a combination is beneficial because the unlabeled data acts as a data-driven form of regularization, allowing generative models trained on few labeled samples to reach the performance of fully-supervised generative models trained on much larger datasets. We call our method Hybrid VAE (H-VAE) as it contains both the generative and the discriminative parts. We validate H-VAE on three large-scale datasets of different modalities: two face datasets: (MultiPIE, CelebA) and a hand pose dataset (NYU Hand Pose). Our qualitative visualizations further support improvements achieved by using partial observations.", "title": "" }, { "docid": "1790c02ba32f15048da0f6f4d783aeda", "text": "In this paper, resource allocation for energy efficient communication in orthogonal frequency division multiple access (OFDMA) downlink networks with large numbers of base station (BS) antennas is studied. Assuming perfect channel state information at the transmitter (CSIT), the resource allocation algorithm design is modeled as a non-convex optimization problem for maximizing the energy efficiency of data transmission (bit/Joule delivered to the users), where the circuit power consumption and a minimum required data rate are taken into consideration. Subsequently, by exploiting the properties of fractional programming, an efficient iterative resource allocation algorithm is proposed to solve the problem. In particular, the power allocation, subcarrier allocation, and antenna allocation policies for each iteration are derived. Simulation results illustrate that the proposed iterative resource allocation algorithm converges in a small number of iterations and unveil the trade-off between energy efficiency and the number of antennas.", "title": "" }, { "docid": "0b97ba6017a7f94ed34330555095f69a", "text": "In response to stress, the brain activates several neuropeptide-secreting systems. This eventually leads to the release of adrenal corticosteroid hormones, which subsequently feed back on the brain and bind to two types of nuclear receptor that act as transcriptional regulators. By targeting many genes, corticosteroids function in a binary fashion, and serve as a master switch in the control of neuronal and network responses that underlie behavioural adaptation. In genetically predisposed individuals, an imbalance in this binary control mechanism can introduce a bias towards stress-related brain disease after adverse experiences. New candidate susceptibility genes that serve as markers for the prediction of vulnerable phenotypes are now being identified.", "title": "" }, { "docid": "3371fe8778b813360debc384040c510e", "text": "Medication non-adherence is a major concern in the healthcare industry and has led to increases in health risks and medical costs. For many neurological diseases, adherence to medication regimens can be assessed by observing movement patterns. However, physician observations are typically assessed based on visual inspection of movement and are limited to clinical testing procedures. Consequently, medication adherence is difficult to measure when patients are away from the clinical setting. The authors propose a data mining driven methodology that uses low cost, non-wearable multimodal sensors to model and predict patients' adherence to medication protocols, based on variations in their gait. The authors conduct a study involving Parkinson's disease patients that are \"on\" and \"off\" their medication in order to determine the statistical validity of the methodology. 
The data acquired can then be used to quantify patients' adherence while away from the clinic. Accordingly, this data-driven system may allow for early warnings regarding patient safety. Using whole-body movement data readings from the patients, the authors were able to discriminate between PD patients on and off medication, with accuracies greater than 97% for some patients using an individually customized model and accuracies of 78% for a generalized model containing multiple patient gait data. The proposed methodology and study demonstrate the potential and effectiveness of using low cost, non-wearable hardware and data mining models to monitor medication adherence outside of the traditional healthcare facility. These innovations may allow for cost effective, remote monitoring of treatment of neurological diseases.", "title": "" }, { "docid": "c01e3b06294f9e84bcc9d493990c6149", "text": "An integrated CMOS 60 GHz phased-array antenna module supporting symmetrical 32 TX/RX elements for wireless docking is described. Bidirectional architecture with shared blocks, mm-wave TR switch design with less than 1dB TX loss, and a full built in self test (BIST) circuits with 5deg and +/-1dB measurement accuracy of phase and power are presented. The RFIC size is 29mm2, consuming 1.2W/0.85W at TX and RX with a 29dBm EIRP at -19dB EVM and 10dB NF.", "title": "" }, { "docid": "568317c1f18c476de5029d0a1e91438e", "text": "Plant volatiles (PVs) are lipophilic molecules with high vapor pressure that serve various ecological roles. The synthesis of PVs involves the removal of hydrophilic moieties and oxidation/hydroxylation, reduction, methylation, and acylation reactions. Some PV biosynthetic enzymes produce multiple products from a single substrate or act on multiple substrates. Genes for PV biosynthesis evolve by duplication of genes that direct other aspects of plant metabolism; these duplicated genes then diverge from each other over time. Changes in the preferred substrate or resultant product of PV enzymes may occur through minimal changes of critical residues. Convergent evolution is often responsible for the ability of distally related species to synthesize the same volatile.", "title": "" }, { "docid": "2f17160c9f01aa779b1745a57e34e1aa", "text": "OBJECTIVE\nTo report an ataxic variant of Alzheimer disease expressing a novel molecular phenotype.\n\n\nDESIGN\nDescription of a novel phenotype associated with a presenilin 1 mutation.\n\n\nSETTING\nThe subject was an outpatient who was diagnosed at the local referral center.\n\n\nPATIENT\nA 28-year-old man presented with psychiatric symptoms and cerebellar signs, followed by cognitive dysfunction. Severe beta-amyloid (Abeta) deposition was accompanied by neurofibrillary tangles and cell loss in the cerebral cortex and by Purkinje cell dendrite loss in the cerebellum. A presenilin 1 gene (PSEN1) S170F mutation was detected.\n\n\nMAIN OUTCOME MEASURES\nWe analyzed the processing of Abeta precursor protein in vitro as well as the Abeta species in brain tissue.\n\n\nRESULTS\nThe PSEN1 S170F mutation induced a 3-fold increase of both secreted Abeta(42) and Abeta(40) species and a 60% increase of secreted Abeta precursor protein in transfected cells. 
Soluble and insoluble fractions isolated from brain tissue showed a prevalence of N-terminally truncated Abeta species ending at both residues 40 and 42.\n\n\nCONCLUSION\nThese findings define a new Alzheimer disease molecular phenotype and support the concept that the phenotypic variability associated with PSEN1 mutations may be dictated by the Abeta aggregates' composition.", "title": "" }, { "docid": "9cd025b1ae9fde7bf30852377eb11057", "text": "Lesch-Nyhan syndrome (LNS) is an X-linked recessive disorder resulting from a deficiency of the metabolic enzyme hypozanthine-guanine phosphoribosyltransferase (HPRT). This syndrome presents with abnormal metabolic and neurological manifestations including hyperuricemia, mental retardation*, spastic cerebral palsy (CP), dystonia, and self-mutilation. The mechanism behind the severe self-mutilating behavior exhibited by patients with LNS is unknown and remains one of the greatest obstacles in providing care to these patients. This report describes a 10-year-old male child with confirmed LNS who was treated for self-mutilation of his hands, tongue, and lips with repeated botulinum toxin A (BTX-A) injections into the bilateral masseters. Our findings suggest that treatment with BTX-A affects both the central and peripheral nervous systems, resulting in reduced self-abusive behavior in this patient.", "title": "" }, { "docid": "3e805d6724dc400d681b3b42393d5ebe", "text": "This paper introduces a framework for conducting and writing an effective literature review. The target audience for the framework includes information systems (IS) doctoral students, novice IS researchers, and other IS researchers who are constantly struggling with the development of an effective literature-based foundation for a proposed research. The proposed framework follows the systematic data processing approach comprised of three major stages: 1) inputs (literature gathering and screening), 2) processing (following Bloom’s Taxonomy), and 3) outputs (writing the literature review). This paper provides the rationale for developing a solid literature review including detailed instructions on how to conduct each stage of the process proposed. The paper concludes by providing arguments for the value of an effective literature review to IS research.", "title": "" }, { "docid": "5706b4955db81d04398fd6a64eb70c7c", "text": "The number of applications (or apps) in the Android Market exceeded 450,000 in 2012 with more than 11 billion total downloads. The necessity to fix bugs and add new features leads to frequent app updates. For each update, a full new version of the app is downloaded to the user's smart phone; this generates significant traffic in the network. We propose to use delta encoding algorithms and to download only the difference between two versions of an app. We implement delta encoding for Android using the bsdiff and bspatch tools and evaluate its performance. We show that app update traffic can be reduced by about 50%, this can lead to significant cost and energy savings.", "title": "" }, { "docid": "156b2c39337f4fe0847b49fa86dc094b", "text": "The paper attempts to describe the space of possible mind designs by first equating all minds to software. Next it proves some properties of the mind design space such as infinitude of minds, size and representation complexity of minds. 
A survey of mind design taxonomies is followed by a proposal for a new field of investigation devoted to study of minds, intellectology.", "title": "" }, { "docid": "59c16bb2ec81dfb0e27ff47ccae0a169", "text": "A geometric dissection is a set of pieces which can be assembled in different ways to form distinct shapes. Dissections are used as recreational puzzles because it is striking when a single set of pieces can construct highly different forms. Existing techniques for creating dissections find pieces that reconstruct two input shapes exactly. Unfortunately, these methods only support simple, abstract shapes because an excessive number of pieces may be needed to reconstruct more complex, naturalistic shapes. We introduce a dissection design technique that supports such shapes by requiring that the pieces reconstruct the shapes only approximately. We find that, in most cases, a small number of pieces suffices to tightly approximate the input shapes. We frame the search for a viable dissection as a combinatorial optimization problem, where the goal is to search for the best approximation to the input shapes using a given number of pieces. We find a lower bound on the tightness of the approximation for a partial dissection solution, which allows us to prune the search space and makes the problem tractable. We demonstrate our approach on several challenging examples, showing that it can create dissections between shapes of significantly greater complexity than those supported by previous techniques.", "title": "" }, { "docid": "b8dfe30c07f0caf46b3fc59406dbf017", "text": "We describe an extensible approach to generating questions for the purpose of reading comprehension assessment and practice. Our framework for question generation composes general-purpose rules to transform declarative sentences into questions, is modular in that existing NLP tools can be leveraged, and includes a statistical component for scoring questions based on features of the input, output, and transformations performed. In an evaluation in which humans rated questions according to several criteria, we found that our implementation achieves 43.3% precisionat-10 and generates approximately 6.8 acceptable questions per 250 words of source text.", "title": "" }, { "docid": "72b15b373785198624438cdd7e187a79", "text": "The technical debt metaphor is widely used to encapsulate numerous software quality problems. The metaphor is attractive to practitioners as it communicates to both technical and nontechnical audiences that if quality problems are not addressed, things may get worse. However, it is unclear whether there are practices that move this metaphor beyond a mere communication mechanism. Existing studies of technical debt have largely focused on code metrics and small surveys of developers. In this paper, we report on our survey of 1,831 participants, primarily software engineers and architects working in long-lived, software-intensive projects from three large organizations, and follow-up interviews of seven software engineers. We analyzed our data using both nonparametric statistics and qualitative text analysis. We found that architectural decisions are the most important source of technical debt. Furthermore, while respondents believe the metaphor is itself important for communication, existing tools are not currently helpful in managing the details. 
We use our results to motivate a technical debt timeline to focus management and tooling approaches.", "title": "" }, { "docid": "e3104e5311dee57067540869f8036ba9", "text": "Direct-touch interaction on mobile phones revolves around screens that compete for visual attention with users' real-world tasks and activities. This paper investigates the impact of these situational impairments on touch-screen interaction. We probe several design factors for touch-screen gestures, under various levels of environmental demands on attention, in comparison to the status-quo approach of soft buttons. We find that in the presence of environmental distractions, gestures can offer significant performance gains and reduced attentional load, while performing as well as soft buttons when the user's attention is focused on the phone. In fact, the speed and accuracy of bezel gestures did not appear to be significantly affected by environment, and some gestures could be articulated eyes-free, with one hand. Bezel-initiated gestures offered the fastest performance, and mark-based gestures were the most accurate. Bezel-initiated marks therefore may offer a promising approach for mobile touch-screen interaction that is less demanding of the user's attention.", "title": "" }, { "docid": "fc6214a4b20dba903a1085bd1b6122e0", "text": "a r t i c l e i n f o Keywords: CRM technology use Marketing capability Customer-centric organizational culture Customer-centric management system Customer relationship management (CRM) technology has attracted significant attention from researchers and practitioners as a facilitator of organizational performance. Even though companies have made tremendous investments in CRM technology, empirical research offers inconsistent support that CRM technology enhances organizational performance. Given this equivocal effect and the increasing need for the generalization of CRM implementation research outside western context, the authors, using data from Korean companies, address the process concerning how CRM technology translates into business outcomes. The results highlight that marketing capability mediates the association between CRM technology use and performance. Moreover, a customer-centric organizational culture and management system facilitate CRM technology use. This study serves not only to clarify the mechanism between CRM technology use and organizational performance, but also to generalize the CRM results in the Korean context. In today's competitive business environment, the success of firm increasingly hinges on the ability to operate customer relationship management (CRM) that enables the development and implementation of more efficient and effective customer-focused strategies. Based on this belief, many companies have made enormous investment in CRM technology as a means to actualize CRM efficiently. Despite conceptual underpinnings of CRM technology and substantial financial implications , empirical research examining the CRM technology-performance link has met with equivocal results. Recent studies demonstrate that only 30% of the organizations introducing CRM technology achieved improvements in their organizational performance (Bull, 2003; Corner and Hinton, 2002). These conflicting findings hint at the potential influences of unexplored mediating or moderating factors and the need of further research on the mechanism by which CRM technology leads to improved business performance. 
Such inconsistent results of CRM technology implementation are not limited to western countries which most of previous CRM research originated from. Even though Korean companies have poured tremendous resources to CRM initiatives since 2000, they also cut down investment in CRM technology drastically due to disappointing returns (Knowledge Research Group, 2004). As a result, Korean companies are increasingly eager to corroborate the returns from investment in CRM. In the eastern culture like Korea that promotes holistic thinking focusing on the relationships between a focal object and overall context (Monga and John, 2007), CRM operates as a two-edged sword. Because eastern culture with holistic thinking tends to value existing relationship with firms or contact point persons …", "title": "" } ]
scidocsrr
e6d9e2a5c0b7e8479162cd5950411d19
Deinterleaving for radar warning receivers with missed pulse consideration
[ { "docid": "d59a63413ad3ca838178fc399fd6b0f3", "text": "In this study a mission data base based clustering approach that can be used in radar warning receivers for the purpose of deinterleaving is suggested. Cell based deinterleaving technique, which is widely used at the present time, utilizes the information of direction of arrival, frequency and pulse width. In this study, different from this approach used in the literature, frequency, direction of arrival and pulse amplitude parameters are utilized for deinterleaving. With this technique it is shown that accurate results can be obtained by simulation.", "title": "" } ]
[ { "docid": "8b22382f560edbd6776e080acd07fd7e", "text": "Emerging evidence suggests that people do not have difficulty judging covariation per se but rather have difficulty decoding standard displays such as scatterplots. Using the data analysis software Tinkerplots, I demonstrate various alternative representations that students appear to be able to use quite effectively to make judgments about covariation. More generally, I argue that data analysis instruction in K-12 should be structured according to how statistical reasoning develops in young students and should, for the time begin, not target specific graphical representations as objectives of instruction. TINKERPLOTS: SOFTWARE FOR THE MIDDLE SCHOOL The computer's potential to improve the teaching of data analysis is now a well-known It includes its power to illuminate key concepts through simulations and multiple-linked representations. It also includes its ability to free students up, at the appropriate time, from time-intensive tasks—from what National Council of Teachers of Mathematics (1989) Standards referred to as the \" narrow aspects of statistics \" (p. 113). This potentially allows instruction to focus more attention on the processes of data analysis—exploring substantive questions of interest, searching for and interpreting patterns and trends in data, and communicating findings. However, as Biehler (1995) has suggested, the younger the student, the more difficult it is to design an appropriate tool for learning statistics. Most of the existing tools for young students have been developed from the \" top down. \" They provide a subset of conventional plots and thus are simpler than professional tools only in that they have fewer options. These \" simplified professional tools \" are ill-suited to younger students who \" need a tool that is designed from their bottom-up perspective of statistical novices and can develop in various ways into a full professional tool (not vice versa) \" (p.3). Tinkerplots is a data analysis tool for the middle school that we are designing \" from the bottom up \" (Konold & Miller, 2001). When a data set is first opened in Tinkerplots, a plot window appears showing a haphazard arrangement of data icons on the screen. As in Tabletop (see Hancock, Kaput & Goldsmith, 1992), each icon represents an individual case. But in Tinkerplots, rather than choosing from a menu of existing plot types (e.g., bar graph, pie chart, scatterplot), students progressively organize the data using a small set of intuitive operators including \" stack, \" \" order, \" and \" separate \". By using these operators in different combinations, …", "title": "" }, { "docid": "aefa4679339bea8e15da21d5ecfb38e9", "text": "Oxytocin is a neuropeptide that is active in the central nervous system and is generally considered to be involved in prosocial behaviors and feelings. In light of its documented positive effect on maternal behavior, we designed a study to ascertain whether oxytocin exerts any therapeutic effects on depressive symptoms in women affected by maternal postnatal depression. A group of 16 mothers were recruited in a randomized double-blind study: the women agreed to take part in a brief course of psychoanalytic psychotherapy (12 sessions, once a week) while also being administered, during the 12-weeks period, a daily dose of intranasal oxytocin (or a placebo). 
The pre-treatment evaluation also included a personality assessment of the major primary-process emotional command systems described by Panksepp () and a semi-quantitative assessment by the therapist of the mother's depressive symptoms and of her personality. No significant effect on depressive symptomatology was found following the administration of oxytocin (as compared to a placebo) during the period of psychotherapy. Nevertheless, a personality trait evaluation of the mothers, conducted in our overall sample group, showed a decrease in the narcissistic trait only within the group who took oxytocin. The depressive (dysphoric) trait was in fact significantly affected by psychotherapy (this effect was only present in the placebo group so it may reflect a positive placebo effect enhancing the favorable influence of psychotherapy on depressive symptoms) but not in the presence of oxytocin. Therefore, the neuropeptide would appear to play some role in the modulation of cerebral functions involved in the self-centered (narcissistic) dimension of the suffering that can occur with postnatal depression. Based on these results, there was support for our hypothesis that what is generally defined as postnatal depression may include disturbances of narcissistic affective balance, and oxytocin supplementation can counteract that type of affective disturbance. The resulting improvements in well-being, reflected in better self-centering in post-partuent mothers, may in turn facilitate better interpersonal acceptance of (and interactions with) the child and thereby, improved recognition of the child's needs.", "title": "" }, { "docid": "0b3e7b6b47f51dc75c99f59e3aa79b52", "text": "This brief presents a frequency-domain analysis of the latch comparator offset due to load capacitor mismatch. Although the analysis is applied to the static latch comparator, the developed method can be extended to the dynamic latch comparator.", "title": "" }, { "docid": "8804339c2c8d1d0471bc26dd0d4432c2", "text": "In this paper, we present an elegant solution to the 2D LIDAR-camera extrinsic calibration problem. Specifically, we develop a simple method for establishing correspondences between a line-scan (2D) LIDAR and a camera using a small calibration target that only contains a straight line. Moreover, we formulate the nonlinear least-squares problem for finding the unknown 6 degree-of-freedom (dof) transformation between the two sensors, and solve it analytically to determine its global minimum. Additionally, we examine the conditions under which the unknown transformation becomes unobservable, which can be used for avoiding ill-conditioned configurations. Finally, we present extensive simulation and experimental results for assessing the performance of the proposed algorithm as compared to alternative analytical approaches.", "title": "" }, { "docid": "5c6bdb80f470d7b9b0e2acd57cb23295", "text": "We present a novel sentence reduction system for automatically removing extraneous phrases from sentences that are extracted from a document for summarization purpose. The system uses multiple sources of knowledge to decide which phrases in an extracted sentence can be removed, including syntactic knowledge, context information, and statistics computed from a corpus which consists of examples written by human professionals. 
Reduction can significantly improve the conciseness of automatic summaries.", "title": "" }, { "docid": "34901b8e3e7667e3a430b70a02595f69", "text": "In the previous NTCIR8-GeoTime task, ABRIR (Appropriate Boolean query Reformulation for Information Retrieval) proved to be one of the most effective systems for retrieving documents with Geographic and Temporal constraints. However, failure analysis showed that the identification of named entities and relationships between these entities and the query is important in improving the quality of the system. In this paper, we propose to use Wikipedia and GeoNames as resources for extracting knowledge about named entities. We also modify our system to use such information.", "title": "" }, { "docid": "2f2cdb8a78e5c8543e243ad84b2482df", "text": "The study of intergenerational mobility and most population research are governed by a two-generation (parent-to-offspring) view of intergenerational influence, to the neglect of the effects of grandparents and other ancestors and nonresident contemporary kin. While appropriate for some populations in some periods, this perspective may omit important sources of intergenerational continuity of family-based social inequality. Social institutions, which transcend individual lives, help support multigenerational influence, particularly at the extreme top and bottom of the social hierarchy, but to some extent in the middle as well. Multigenerational influence also works through demographic processes because families influence subsequent generations through differential fertility and survival, migration, and marriage patterns, as well as through direct transmission of socioeconomic rewards, statuses, and positions. Future research should attend more closely to multigenerational effects; to the tandem nature of demographic and socioeconomic reproduction; and to data, measures, and models that transcend coresident nuclear families.", "title": "" }, { "docid": "94ade8e5d8984506b2500835f973fc56", "text": "Sentiment analysis is a text categorization problem that consists in automatically assigning text documents to pre- defined classes that represent sentiments or a positive/negative opinion about a subject. To solve this task, machine learning techniques can be used. However, in order to achieve good gen- eralization, these techniques require a thorough pre-processing and an apropriate data representation. To deal with these fundamental issues, this work proposes the use of convolutional neural networks and density-based clustering algorithms. The word representations used in this work were obtained from vectors previously trained in an unsupervised way, denominated word embeddings. These representations are able to capture syntactic and semantic information of words, which leads to similar words to be projected closer together in the semantic space. In this scenario, in order to improve the performance of the convolutional neural network, the use of a clustering algorithm in the semantic space to extract additional information from the data is proposed. A density-based clustering algorithm was used to detect and remove outliers from the documents to be classified before these documents were used to train the con- volutional neural network. We conducted experiments with two different embeddings across three datasets in order to validate the effectiveness of our method. 
Results show that removing outliers from documents is capable of slightly improving the accuracy of the model and reducing computational cost for the non-static training approach. (0)", "title": "" }, { "docid": "52ce8c1259050f403723ec38782898f1", "text": "Indian population is growing very fast and is responsible for posing various environmental risks like traffic noise which is the primitive contributor to the overall noise pollution in urban environment. So, an attempt has been made to develop a web enabled application for spatio-temporal semantic analysis of traffic noise of one of the urban road segments in India. Initially, a traffic noise model was proposed for the study area based on the Calixto model. Later, a City Geographic Markup Language (CityGML) model, which is an OGC encoding standard for 3D data representation, was developed and stored into PostGIS. A web GIS framework was implemented for simulation of traffic noise level mapped on building walls using the data from PostGIS. Finally, spatio-temporal semantic analysis to quantify the effects in terms of threshold noise level, number of walls and roofs affected from start to the end of the day, was performed.", "title": "" }, { "docid": "7fac4b577b72cc3efb3a84cc6001bae8", "text": "When detecting and recording the EMG signal, there are two main issues of concern that influence the fidelity of the signal. The first is the signal to noise ratio. That is, the ratio of the energy in the EMG signal to the energy in the noise signal. In general, noise is defined as electrical signals that are not part of the wanted EMG signal. The other is the distortion of the signal, meaning that the relative contribution of any frequency component in the EMG signal should not be altered. It is well established that the amplitude of the EMG signal is stochastic (random) in nature and can be reasonably represented by a Gausian distribution function. The amplitude of the signal can range from 0 to 10 mV (peak-to-peak) or 0 to 1.5 mV (rms). The usable energy of the signal is limited to the 0 to 500 Hz frequency range, with the dominant energy being in the 50-150 Hz range. Usable signals are those with energy above the electrical noise level. An example of the frequency spectrum of the EMG signal is presented in Figure 1. Figure 1: Frequency spectrum of the EMG signal detected from the Tibialis Anterior muscle during a constant force isometric contraction at 50% of voluntary maximum.", "title": "" }, { "docid": "94a5e443ff4d6a6decdf1aeeb1460788", "text": "Teaching the computer to understand language is the major goal in the field of natural language processing. In this thesis we introduce computational methods that aim to extract language structure— e.g. grammar, semantics or syntax— from text, which provides the computer with information in order to understand language. During the last decades, scientific efforts and the increase of computational resources made it possible to come closer to the goal of understanding language. In order to extract language structure, many approaches train the computer on manually created resources. Most of these so-called supervised methods show high performance when applied to similar textual data. However, they perform inferior when operating on textual data, which are different to the one they are trained on. Whereas training the computer is essential to obtain reasonable structure from natural language, we want to avoid training the computer using manually created resources. 
In this thesis, we present so-called unsupervisedmethods, which are suited to learn patterns in order to extract structure from textual data directly. These patterns are learned with methods that extract the semantics (meanings) of words and phrases. In comparison to manually built knowledge bases, unsupervised methods are more flexible: they can extract structure from text of different languages or text domains (e.g. finance or medical texts), without requiring manually annotated structure. However, learning structure from text often faces sparsity issues. The reason for these phenomena is that in language many words occur only few times. If a word is seen only few times no precise information can be extracted from the text it occurs. Whereas sparsity issues cannot be solved completely, information about most words can be gained by using large amounts of data. In the first chapter, we briefly describe how computers can learn to understand language. Afterwards, we present the main contributions, list the publications this thesis is based on and give an overview of this thesis. Chapter 2 introduces the terminology used in this thesis and gives a background about natural language processing. Then, we characterize the linguistic theory on how humans understand language. Afterwards, we show how the underlying linguistic intuition can be", "title": "" }, { "docid": "65dc64d7ea66d8c1a37c668741967496", "text": "Recently, path norm was proposed as a new capacity measure for neural networks with Rectified Linear Unit (ReLU) activation function, which takes the rescaling-invariant property of ReLU into account. It has been shown that the generalization error bound in terms of the path norm explains the empirical generalization behaviors of the ReLU neural networks better than that of other capacity measures. Moreover, optimization algorithms which take path norm as the regularization term to the loss function, like Path-SGD, have been shown to achieve better generalization performance. However, the path norm counts the values of all paths, and hence the capacity measure based on path norm could be improperly influenced by the dependency among different paths. It is also known that each path of a ReLU network can be represented by a small group of linearly independent basis paths with multiplication and division operation, which indicates that the generalization behavior of the network only depends on only a few basis paths. Motivated by this, we propose a new norm Basis-path Norm based on a group of linearly independent paths to measure the capacity of neural networks more accurately. We establish a generalization error bound based on this basis path norm, and show it explains the generalization behaviors of ReLU networks more accurately than previous capacity measures via extensive experiments. In addition, we develop optimization algorithms which minimize the empirical risk regularized by the basis-path norm. Our experiments on benchmark datasets demonstrate that the proposed regularization method achieves clearly better performance on the test set than the previous regularization approaches.", "title": "" }, { "docid": "b776bf3acb830552eb1ecf353b08edee", "text": "The size and high rate of change of source code comprising a software system make it difficult for software developers to keep up with who on the team knows about particular parts of the code. Existing approaches to this problem are based solely on authorship of code. 
In this paper, we present data from two professional software development teams to show that both authorship and interaction information about how a developer interacts with the code are important in characterizing a developer's knowledge of code. We introduce the degree-of-knowledge model that computes automatically a real value for each source code element based on both authorship and interaction information. We show that the degree-of-knowledge model can provide better results than an existing expertise finding approach and also report on case studies of the use of the model to support knowledge transfer and to identify changes of interest.", "title": "" }, { "docid": "bca225a3a22ed06e038da5ded78d03ee", "text": "Shaping the nasal tip is one of the most complex and challenging components of rhinoplasty surgery. There has been a gradual progression away from cartilage splitting and morselization techniques tomore conservativemethods that rely on suturing or structural grafting to reshape and reinforce native tissue geometry. In parallel with these changes and the broad adoption of the open structure rhinoplasty approach, more emphasis has been placed on maintaining native cartilage structure. From the standpoint of structural dynamics and mechanics, rhinoplasty has further transitioned toward a paradigm where tensile and compressive forces are balanced to achieve stable structure, rather than relying upon structural grafting alone to alter tissue curvature or achieve stability. Correction of the broad nasal tip due to convexity of the lateral crura has remained a challenge, and multiple techniques have been developed to address excessive curvature in this region.1 Contemporary methods include the use of lateral crural mattress sutures,2,3 lateral crural turn-in and turn-over flaps,4–7 lateral crural strut grafts,8,9 lateral crural splitting,10 and lateral crural repositioning.11,12 Excisional techniques, such as cephalic trim, weaken structural support mechanisms and can lead to retraction and collapse.13 In contrast, suturing techniques preserve structure and are in principal adjustable and reversible, which is particularly important as rhinoplasty has a historically high revision rate. Davis has recently published an innovative approach for correcting convexity of the lateral crus, referred to as lateral crural tensioning (LCT).13–15 This technique involves performing an aggressive lateral crural steal maneuver16–19 in tandem with using a caudal septal extension graft (CSEG).", "title": "" }, { "docid": "731c5544759a958272e08f928bd364eb", "text": "A key method of reducing morbidity and mortality is childhood immunization, yet in 2003 only 69% of Filipino children received all suggested vaccinations. Data from the 2003 Philippines Demographic Health Survey were used to identify risk factors for non- and partial-immunization. Results of the multinomial logistic regression analyses indicate that mothers who have less education, and who have not attended the minimally-recommended four antenatal visits are less likely to have fully immunized children. To increase immunization coverage in the Philippines, knowledge transfer to mothers must improve.", "title": "" }, { "docid": "c7eca96393cfd88bda265fb9bcaa4630", "text": "According to the World Health Organization, around 28–35% of people aged 65 and older fall each year. This number increases to around 32–42% for people over 70 years old. 
For this reason, this research targets the exploration of the role of Convolutional Neural Networks(CNN) in human fall detection. There are a number of current solutions related to fall detection; however, remain low detection accuracy. Although CNN has proven a powerful technique for image recognition problems, and the CNN library in Matlab was designed to work with either images or matrices, this research explored how to apply CNN to streaming sensor data, collected from Body Sensor Networks (BSN), in order to improve the fall detection accuracy. The idea of this research is that given the stream data sets as input, we converted them into images before applying CNN. The final accuracy result achieved is, to the best of our knowledge, the highest compared to other proposed methods: 92.3%.", "title": "" }, { "docid": "79351983ed6ba7bd3400b1a08c458fde", "text": "The intranuclear location of genomic loci and the dynamics of these loci are important parameters for understanding the spatial and temporal regulation of gene expression. Recently it has proven possible to visualize endogenous genomic loci in live cells by the use of transcription activator-like effectors (TALEs), as well as modified versions of the bacterial immunity clustered regularly interspersed short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) system. Here we report the design of multicolor versions of CRISPR using catalytically inactive Cas9 endonuclease (dCas9) from three bacterial orthologs. Each pair of dCas9-fluorescent proteins and cognate single-guide RNAs (sgRNAs) efficiently labeled several target loci in live human cells. Using pairs of differently colored dCas9-sgRNAs, it was possible to determine the intranuclear distance between loci on different chromosomes. In addition, the fluorescence spatial resolution between two loci on the same chromosome could be determined and related to the linear distance between them on the chromosome's physical map, thereby permitting assessment of the DNA compaction of such regions in a live cell.", "title": "" }, { "docid": "8bd510eecc82eee91ecd0b4650da28ed", "text": "BACKGROUND AND OBJECTIVES\nLow-intensity laser therapy (LILT) has been studied in many fields of dentistry, but to our knowledge, this is the first time that its effects on orthodontic movement velocity in humans are investigated.\n\n\nSTUDY DESIGN/PATIENTS AND METHODS\nEleven patients were recruited for this 2-month study. One half of the upper arcade was considered control group (CG) and received mechanical activation of the canine teeth every 30 days. The opposite half received the same mechanical activation and was also irradiated with a diode laser emitting light at 780 nm, during 10 seconds at 20 mW, 5 J/cm2, on 4 days of each month. Data of the biometrical progress of both groups were statistically compared.\n\n\nRESULTS\nAll patients showed significant higher acceleration of the retraction of canines on the side treated with LILT when compared to the control.\n\n\nCONCLUSIONS\nOur findings suggest that LILT does accelerate human teeth movement and could therefore considerably shorten the whole treatment duration.", "title": "" }, { "docid": "2e288b78b50cd771f4c918794c3e9046", "text": "Traditional approaches to Relation Extraction from text require manually defining the relations to be extracted. 
We propose here an approach to automatically discovering relevant relations, given a large text corpus plus an initial ontology defining hundreds of noun categories (e.g., Athlete, Musician, Instrument). Our approach discovers frequently stated relations between pairs of these categories, using a two step process. For each pair of categories (e.g., Musician and Instrument) it first coclusters the text contexts that connect known instances of the two categories, generating a candidate relation for each resulting cluster. It then applies a trained classifier to determine which of these candidate relations is semantically valid. Our experiments apply this to a text corpus containing approximately 200 million web pages and an ontology containing 122 categories from the NELL system [Carlson et al., 2010b], producing a set of 781 proposed candidate relations, approximately half of which are semantically valid. We conclude this is a useful approach to semi-automatic extension of the ontology for large-scale information extraction systems such as NELL.", "title": "" }, { "docid": "8e3b73204d1d62337c4b2aabdbaa8973", "text": "The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space. We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary. Through a systematic empirical investigation, we show that state-of-the-art deep nets learn connected classification regions, and that the decision boundary in the vicinity of datapoints is flat along most directions. We further draw an essential connection between two seemingly unrelated properties of deep networks: their sensitivity to additive perturbations in the inputs, and the curvature of their decision boundary. The directions where the decision boundary is curved in fact characterize the directions to which the classifier is the most vulnerable. We finally leverage a fundamental asymmetry in the curvature of the decision boundary of deep nets, and propose a method to discriminate between original images, and images perturbed with small adversarial examples. We show the effectiveness of this purely geometric approach for detecting small adversarial perturbations in images, and for recovering the labels of perturbed images.", "title": "" } ]
scidocsrr
a74e2c3798f7a14f0a498802fb5cd275
Improving trace accuracy through data-driven configuration and composition of tracing features
[ { "docid": "f391c56dd581d965548062944200e95f", "text": "We present a traceability recovery method and tool based on latent semantic indexing (LSI) in the context of an artefact management system. The tool highlights the candidate links not identified yet by the software engineer and the links identified but missed by the tool, probably due to inconsistencies in the usage of domain terms in the traced software artefacts. We also present a case study of using the traceability recovery tool on software artefacts belonging to different categories of documents, including requirement, design, and testing documents, as well as code components.", "title": "" } ]
[ { "docid": "dd9942a62311e363d4b3641324dbd96a", "text": "A series of diurnal airborne campaigns were conducted over an orchard field to assess the canopy Photochemical Reflectance Index (PRI) as an indicator of water stress. Airborne campaigns over two years were conducted with the Airborne Hyperspectral Scanner (AHS) over an orchard field to investigate changes in PRI, in the Transformed Chlorophyll Absorption in Reflectance Index (TCARI) normalized by the Optimized SoilAdjusted Vegetation Index (OSAVI) (TCARI/OSAVI), and in the Normalized Difference Vegetation Index (NDVI) as function of field-measured physiological indicators of water stress, such as stomatal conductance, stem water potential, steady-state fluorescence, and crown temperature. The AHS sensor was flown at three times on each 2004 and 2005 years, collecting 2 m spatial resolution imagery in 80 spectral bands in the 0.43– 12.5 μm spectral range. Indices PRI, TCARI/OSAVI, and NDVI were calculated from reflectance bands, and thermal bands were assessed for the retrieval of land surface temperature, separating pure crowns from shadows and sunlit soil pixels. The Photochemical Reflectance Index, originally developed for xanthophyll cycle pigment change detection was calculated to assess its relationship with water stress at a canopy level, and more important, to assess canopy structural and viewing geometry effects for water stress detection in diurnal airborne experiments. The FLIGHT 3D canopy reflectance model was used to simulate the bi-directional reflectance changes as function of the viewing geometry, background and canopy structure. This manuscript demonstrates that the airborne-level PRI index is sensitive to the de-epoxidation of the xanthophyll pigment cycle caused by water stress levels, but affected diurnally by the confounding effects of BRDF. Among the three vegetation indices calculated, only airborne PRI demonstrated sensitivity to diurnal changes in physiological indicators of water stress, such as canopy temperature minus air temperature (Tc–Ta), stomatal conductance (G), and stem water potential (ψ) measured in the field at the time of each image acquisition. No relationships were found from the diurnal experiments between NDVI and TCARI/OSAVI with the tree-measured physiological measures. FLIGHT model simulations of PRI demonstrated that PRI is highly affected by the canopy structure and background. © 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "2c2942905010e71cda5f8b0f41cf2dd0", "text": "1 Focus and anaphoric destressing Consider a pronunciation of (1) with prominence on the capitalized noun phrases. In terms of a relational notion of prominence, the subject NP she] is prominent within the clause S she beats me], and NP Sue] is prominent within the clause S Sue beats me]. This prosody seems to have the pragmatic function of putting the two clauses into opposition, with prominences indicating where they diier, and prosodic reduction of the remaining parts indicating where the clauses are invariant. (1) She beats me more often than Sue beats me Car84], Roc86] and Roo92] propose theories of focus interpretation which formalize the idea just outlined. Under my assumptions, the prominences are the correlates of a syntactic focus features on the two prominent NPs, written as F subscripts. Further, the grammatical representation of (1) includes operators which interpret the focus features at the level of the minimal dominating S nodes. 
In the logical form below, each focus feature is interpreted by an operator written .", "title": "" }, { "docid": "db0cac6172c63eb5b91b2e29d037cc63", "text": "In this article, we address open challenges in large-scale classification, focusing on how to effectively leverage the dependency structures (hierarchical or graphical) among class labels, and how to make the inference scalable in jointly optimizing all model parameters. We propose two main approaches, namely the hierarchical Bayesian inference framework and the recursive regularization scheme. The key idea in both approaches is to reinforce the similarity among parameter across the nodes in a hierarchy or network based on the proximity and connectivity of the nodes. For scalability, we develop hierarchical variational inference algorithms and fast dual coordinate descent training procedures with parallelization. In our experiments for classification problems with hundreds of thousands of classes and millions of training instances with terabytes of parameters, the proposed methods show consistent and statistically significant improvements over other competing approaches, and the best results on multiple benchmark datasets for large-scale classification.", "title": "" }, { "docid": "15a0898247365fa5ff29fd54560f547d", "text": "SemEval 2018 Task 7 focuses on relation extraction and classification in scientific literature. In this work, we present our tree-based LSTM network for this shared task. Our approach placed 9th (of 28) for subtask 1.1 (relation classification), and 5th (of 20) for subtask 1.2 (relation classification with noisy entities). We also provide an ablation study of features included as input to the network.", "title": "" }, { "docid": "b759613b1eedd29d32fbbc118767b515", "text": "Deep learning has been shown successful in a number of domains, ranging from acoustics, images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, a significant amount of research efforts have been devoted to this area, greatly advancing graph analyzing techniques. In this survey, we comprehensively review different kinds of deep learning methods applied to graphs. We divide existing methods into three main categories: semi-supervised methods including Graph Neural Networks and Graph Convolutional Networks, unsupervised methods including Graph Autoencoders, and recent advancements including Graph Recurrent Neural Networks and Graph Reinforcement Learning. We then provide a comprehensive overview of these methods in a systematic manner following their history of developments. We also analyze the differences of these methods and how to composite different architectures. Finally, we briefly outline their applications and discuss potential future directions.", "title": "" }, { "docid": "15709a8aecbf8f4f35bf47b79c3dca03", "text": "We introduce a new approach to hierarchy formation and task decomposition in hierarchical reinforcement learning. Our method is based on the Hierarchy Of Abstract Machines (HAM) framework because HAM approach is able to design efficient controllers that will realize specific behaviors in real robots. The key to our algorithm is the introduction of the internal or “mental” environment in which the state represents the structure of the HAM hierarchy. The internal action in this environment leads to changes the hierarchy of HAMs. 
We propose the classical Qlearning procedure in the internal environment which allows the agent to obtain an optimal hierarchy. We extends the HAM framework by adding on-model approach to select the appropriate sub-machine to execute action sequences for certain class of external environment states. Preliminary experiments demonstrated the prospects of the method.", "title": "" }, { "docid": "483c87e4ad58596f4651e4e63c501579", "text": "Chitosan, a polyaminosaccharide obtained by alkaline deacetylation of chitin, possesses useful properties including biodegradability, biocompatibility, low toxicity, and good miscibility with other polymers. It is extensively used in many applications in biology, medicine, agriculture, environmental protection, and the food and pharmaceutical industries. The amino and hydroxyl groups present in the chitosan backbone provide positions for modifications that are influenced by factors such as the molecular weight, viscosity, and type of chitosan, as well as the reaction conditions. The modification of chitosan by chemical methods is of interest because the basic chitosan skeleton is not modified and the process results in new or improved properties of the material. Among the chitosan derivatives, cyclodextrin-grafted chitosan and poly(ethylene glycol)-grafted chitosan are excellent candidates for a range of biomedical, environmental decontamination, and industrial purposes. This work discusses modifications including chitosan with attached cyclodextrin and poly(ethylene glycol), and the main applications of these chitosan derivatives in the biomedical field.", "title": "" }, { "docid": "4bec71105c8dca3d0b48e99cdd4e809a", "text": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. 
Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.", "title": "" }, { "docid": "19d79b136a9af42ac610131217de8c08", "text": "The aim of the experimental study described in this article is to investigate the effect of a lifelike character with subtle expressivity on the affective state of users. The character acts as a quizmaster in the context of a mathematical game. This application was chosen as a simple, and for the sake of the experiment, highly controllable, instance of human–computer interfaces and software. Subtle expressivity refers to the character’s affective response to the user’s performance by emulating multimodal human–human communicative behavior such as different body gestures and varying linguistic style. The impact of em-pathic behavior, which is a special form of affective response, is examined by deliberately frustrating the user during the game progress. There are two novel aspects in this investigation. First, we employ an animated interface agent to address the affective state of users rather than a text-based interface, which has been used in related research. Second, while previous empirical studies rely on questionnaires to evaluate the effect of life-like characters, we utilize physiological information of users (in addition to questionnaire data) in order to precisely associate the occurrence of interface events with users’ autonomic nervous system activity. The results of our study indicate that empathic character response can significantly decrease user stress see front matter r 2004 Elsevier Ltd. All rights reserved. .ijhcs.2004.11.009 cle is a significantly revised and extended version of Prendinger et al. (2003). nding author. Tel.: +813 4212 2650; fax: +81 3 3556 1916. dresses: helmut@nii.ac.jp (H. Prendinger), jmori@miv.t.u-tokyo.ac.jp (J. Mori), v.t.u-tokyo.ac.jp (M. Ishizuka).", "title": "" }, { "docid": "72b246820952b752bd001212e5f0dd2e", "text": "This paper presents an attribute and-or grammar (A-AOG) model for jointly inferring human body pose and human attributes in a parse graph with attributes augmented to nodes in the hierarchical representation. In contrast to other popular methods in the current literature that train separate classifiers for poses and individual attributes, our method explicitly represents the decomposition and articulation of body parts, and account for the correlations between poses and attributes. The A-AOG model is an amalgamation of three traditional grammar formulations: (i) Phrase structure grammar representing the hierarchical decomposition of the human body from whole to parts; (ii) Dependency grammar modeling the geometric articulation by a kinematic graph of the body pose; and (iii) Attribute grammar accounting for the compatibility relations between different parts in the hierarchy so that their appearances follow a consistent style. The parse graph outputs human detection, pose estimation, and attribute prediction simultaneously, which are intuitive and interpretable. We conduct experiments on two tasks on two datasets, and experimental results demonstrate the advantage of joint modeling in comparison with computing poses and attributes independently. 
Furthermore, our model obtains better performance over existing methods for both pose estimation and attribute prediction tasks.", "title": "" }, { "docid": "e8e8e6d288491e715177a03601500073", "text": "Protein–protein interactions constitute the regulatory network that coordinates diverse cellular functions. Co-immunoprecipitation (co-IP) is a widely used and effective technique to study protein–protein interactions in living cells. However, the time and cost for the preparation of a highly specific antibody is the major disadvantage associated with this technique. In the present study, a co-IP system was developed to detect protein–protein interactions based on an improved protoplast transient expression system by using commercially available antibodies. This co-IP system eliminates the need for specific antibody preparation and transgenic plant production. Leaf sheaths of rice green seedlings were used for the protoplast transient expression system which demonstrated high transformation and co-transformation efficiencies of plasmids. The transient expression system developed by this study is suitable for subcellular localization and protein detection. This work provides a rapid, reliable, and cost-effective system to study transient gene expression, protein subcellular localization, and characterization of protein–protein interactions in vivo.", "title": "" }, { "docid": "76d029c669e84e420c8513bd837fb59b", "text": "Since its original publication, the Semi-Global Matching (SGM) technique has been re-implemented by many researchers and companies. The method offers a very good trade off between runtime and accuracy, especially at object borders and fine structures. It is also robust against radiometric differences and not sensitive to the choice of parameters. Therefore, it is well suited for solving practical problems. The applications reach from remote sensing, like deriving digital surface models from aerial and satellite images, to robotics and driver assistance systems. This paper motivates and explains the method, shows current developments as well as examples from various applications.", "title": "" }, { "docid": "fb6d89e2faee942a0a92ded6ead0d8c7", "text": "Each relationship has its own personality. Almost immediately after a social interaction begins, verbal and nonverbal behaviors become synchronized. Even in asocial contexts, individuals tend to produce utterances that match the grammatical structure of sentences they have recently heard or read. Three projects explore language style matching (LSM) in everyday writing tasks and professional writing. LSM is the relative use of 9 function word categories (e.g., articles, personal pronouns) between any 2 texts. In the first project, 2 samples totaling 1,744 college students answered 4 essay questions written in very different styles. Students automatically matched the language style of the target questions. Overall, the LSM metric was internally consistent and reliable across writing tasks. Women, participants of higher socioeconomic status, and students who earned higher test grades matched with targets more than others did. In the second project, 74 participants completed cliffhanger excerpts from popular fiction. Judges' ratings of excerpt-response similarity were related to content matching but not function word matching, as indexed by LSM. Further, participants were not able to intentionally increase style or content matching. 
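A minimal sketch of how an LSM-style function-word similarity between two texts could be computed is given below; the category word lists, the per-category formula, and the smoothing constant are illustrative assumptions rather than the exact metric used in the study above.

# Hypothetical LSM-style score: compare the relative use of function-word
# categories in two texts and average the per-category similarity.
import re

FUNCTION_WORDS = {                     # tiny stand-in lists, one per category
    "articles":          {"a", "an", "the"},
    "personal_pronouns": {"i", "you", "he", "she", "we", "they", "me", "him", "her", "us", "them"},
    "prepositions":      {"in", "on", "at", "of", "to", "with", "for", "from"},
    "conjunctions":      {"and", "but", "or", "so", "because"},
    "negations":         {"no", "not", "never"},
}

def category_rates(text):
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in FUNCTION_WORDS.items()}

def lsm_score(text_a, text_b, eps=0.0001):
    ra, rb = category_rates(text_a), category_rates(text_b)
    sims = [1.0 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + eps) for c in FUNCTION_WORDS]
    return sum(sims) / len(sims)       # closer to 1 = more similar function-word use

print(round(lsm_score("The cat sat on the mat and purred.",
                      "A dog lay on a rug because it was tired."), 3))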
In the final project, an archival study tracked the professional writing and personal correspondence of 3 pairs of famous writers across their relationships. Language matching in poetry and letters reflected fluctuations in the relationships of 3 couples: Sigmund Freud and Carl Jung, Elizabeth Barrett and Robert Browning, and Sylvia Plath and Ted Hughes. Implications for using LSM as an implicit marker of social engagement and influence are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved).", "title": "" }, { "docid": "9fb9664eea84d3bc0f59f7c4714debc1", "text": "International research has shown that users are complacent when it comes to smartphone security behaviour. This is contradictory, as users perceive data stored on the `smart' devices to be private and worth protecting. Traditionally less attention is paid to human factors compared to technical security controls (such as firewalls and antivirus), but there is a crucial need to analyse human aspects as technology alone cannot deliver complete security solutions. Increasing a user's knowledge can improve compliance with good security practices, but for trainers and educators to create meaningful security awareness materials they must have a thorough understanding of users' existing behaviours, misconceptions and general attitude towards smartphone security.", "title": "" }, { "docid": "ea624ba3a83c4f042fb48f4ebcba705a", "text": "Using magnetic field data as fingerprints for smartphone indoor positioning has become popular in recent years. Particle filter is often used to improve accuracy. However, most of existing particle filter based approaches either are heavily affected by motion estimation errors, which result in unreliable systems, or impose strong restrictions on smartphone such as fixed phone orientation, which are not practical for real-life use. In this paper, we present a novel indoor positioning system for smartphones, which is built on our proposed reliability-augmented particle filter. We create several innovations on the motion model, the measurement model, and the resampling model to enhance the basic particle filter. To minimize errors in motion estimation and improve the robustness of the basic particle filter, we propose a dynamic step length estimation algorithm and a heuristic particle resampling algorithm. We use a hybrid measurement model, combining a new magnetic fingerprinting model and the existing magnitude fingerprinting model, to improve system performance, and importantly avoid calibrating magnetometers for different smartphones. In addition, we propose an adaptive sampling algorithm to reduce computation overhead, which in turn improves overall usability tremendously. Finally, we also analyze the “Kidnapped Robot Problem” and present a practical solution. We conduct comprehensive experimental studies, and the results show that our system achieves an accuracy of 1~2 m on average in a large building.", "title": "" }, { "docid": "a7d9ac415843146b82139e50edf4ccf2", "text": "Recommender Systems (RSs) are software tools and techniques providing suggestions of relevant items to users. These systems have received increasing attention from both academy and industry since the 90’s, due to a variety of practical applications as well as complex problems to solve. Since then, the number of research papers published has increased significantly in many application domains (books, documents, images, movies, music, shopping, TV programs, and others). 
One of these domains has our attention in this paper due to the massive proliferation of televisions (TVs) with computational and network capabilities and due to the large amount of TV content and TV-related content available on the Web. With the evolution of TVs and RSs, the diversity of recommender systems for TV has increased substantially. In this direction, it is worth mentioning that we consider “recommender systems for TV” as those that make recommendations of both TV-content and any content related to TV. Due to this diversity, more investigation is necessary because research on recommender systems for TV domain is still broader and less mature than in other research areas. Thus, this literature review (LR) seeks to classify, synthesize, and present studies according to different perspectives of RSs in the television domain. For that, we initially identified, from the scientific literature, 282 relevant papers published from 2003 to May, 2015. The papers were then categorized and discussed according to different research and development perspectives: recommended item types, approaches, algorithms, architectural models, output devices, user profiling and evaluation. The obtained results can be useful to reveal trends and opportunities for both researchers and practitioners in the area.", "title": "" }, { "docid": "20710cf5fac30800217c5b9568d3541a", "text": "BACKGROUND\nAcne scarring is treatable by a variety of modalities. Ablative carbon dioxide laser (ACL), while effective, is associated with undesirable side effect profiles. Newer modalities using the principles of fractional photothermolysis (FP) produce modest results than traditional carbon dioxide (CO(2)) lasers but with fewer side effects. A novel ablative CO(2) laser device use a technique called ablative fractional resurfacing (AFR), combines CO(2) ablation with a FP system. This study was conducted to compare the efficacy of Q-switched 1064-nm Nd: YAG laser and that of fractional CO(2) laser in the treatment of patients with moderate to severe acne scarring.\n\n\nMETHODS\nSixty four subjects with moderate to severe facial acne scars were divided randomly into two groups. Group A received Q-Switched 1064-nm Nd: YAG laser and group B received fractional CO(2) laser. Two groups underwent four session treatment with laser at one month intervals. Results were evaluated by patients based on subjective satisfaction and physicians' assessment and photo evaluation by two blinded dermatologists. Assessments were obtained at baseline and at three and six months after final treatment.\n\n\nRESULTS\nPost-treatment side effects were mild and transient in both groups. According to subjective satisfaction (p = 0.01) and physicians' assessment (p < 0.001), fractional CO(2) laser was significantly more effective than Q- Switched 1064- nm Nd: YAG laser.\n\n\nCONCLUSIONS\nFractional CO2 laser has the most significant effect on the improvement of atrophic facial acne scars, compared with Q-Switched 1064-nm Nd: YAG laser.", "title": "" }, { "docid": "0aa7a61ae2d73b017b5acdd885d7c0ef", "text": "3GPP Long Term Evolution-Advanced (LTE-A) aims at enhancement of LTE performance in many respects including the system capacity and network coverage. This enhancement can be accomplished by heterogeneous networks (HetNets) where additional micro-nodes that require lower transmission power are efficiently deployed. 
More careful management of mobility and handover (HO) might be required in HetNets compared to homogeneous networks where all nodes require the same transmission power. In this article, we provide a technical overview of mobility and HO management for HetNets in LTEA. Moreover, we investigate the A3-event which requires a certain criterion to be met for HO. The criterion involves the reference symbol received power/quality of user equipment (UE), hysteresis margin, and a number of offset parameters based on proper HO timing, i.e., time-to-trigger (TTT). Optimum setting of these parameters are not trivial task, and has to be determined depending on UE speed, propagation environment, system load, deployed HetNets configuration, etc. Therefore, adaptive TTT values with given hysteresis margin for the lowest ping pong rate within 2 % of radio link failure rate depending on UE speed and deployed HetNets configuration are investigated in this article.", "title": "" }, { "docid": "9172d4ba2e86a7d4918ef64d7b837084", "text": "Electromagnetic generators (EMGs) and triboelectric nanogenerators (TENGs) are the two most powerful approaches for harvesting ambient mechanical energy, but the effectiveness of each depends on the triggering frequency. Here, after systematically comparing the performances of EMGs and TENGs under low-frequency motion (<5 Hz), we demonstrated that the output performance of EMGs is proportional to the square of the frequency, while that of TENGs is approximately in proportion to the frequency. Therefore, the TENG has a much better performance than that of the EMG at low frequency (typically 0.1-3 Hz). Importantly, the extremely small output voltage of the EMG at low frequency makes it almost inapplicable to drive any electronic unit that requires a certain threshold voltage (∼0.2-4 V), so that most of the harvested energy is wasted. In contrast, a TENG has an output voltage that is usually high enough (>10-100 V) and independent of frequency so that most of the generated power can be effectively used to power the devices. Furthermore, a TENG also has advantages of light weight, low cost, and easy scale up through advanced structure designs. All these merits verify the possible killer application of a TENG for harvesting energy at low frequency from motions such as human motions for powering small electronics and possibly ocean waves for large-scale blue energy.", "title": "" }, { "docid": "f83b5593f24eb3ac549699d2d43f7e8a", "text": "As economic globalization intensifies competition and creates a climate of constant change, winning and keeping customers has never been more important. Nowadays, Banks have realized that customer relationships are a very important factor for their success. Customer relationship management (CRM) is a strategy that can help them to build long-lasting relationships with their customers and increase their revenues and profits. CRM in the banking sector is of greater importance. The aim of this study is to explore and analyze the strategic implementation of CRM in selected banks of Pakistan, identify the benefits, the problems, as well as the success and failure factors of the implementation and develop a better understanding of CRM impact on banking competitiveness as well as provide a greater understanding of what constitutes good CRM practices. In this study, CMAT (Customer Management Assessment Tool) model is used which encompasses all the essential elements of practical customer relationship management. 
Data is collected through questionnaires from the three major banks (HBL, MCB, and Citibank) of Pakistan. The evidence supports that CRM is gradually being practiced in studied banks; however the true spirit of CRM is still needed to be on the active agenda of the banking sector in Pakistan. This study contributes to the financial services literature as it is one of the very few that have examined CRM applications, a comparatively new technology, in the Pakistani banking sector, where very limited research has taken place on the implementation of CRM.", "title": "" } ]
scidocsrr
77390a5ce1fe710ff222b952c540085e
Maiter: An Asynchronous Graph Processing Framework for Delta-Based Accumulative Iterative Computation
[ { "docid": "09f91a5fb0d54f6cc753f321c81d518f", "text": "Proximity measures quantify the closeness or similarity between nodes in a social network and form the basis of a range of applications in social sciences, business, information technology, computer networks, and cyber security. It is challenging to estimate proximity measures in online social networks due to their massive scale (with millions of users) and dynamic nature (with hundreds of thousands of new nodes and millions of edges added daily). To address this challenge, we develop two novel methods to efficiently and accurately approximate a large family of proximity measures. We also propose a novel incremental update algorithm to enable near real-time proximity estimation in highly dynamic social networks. Evaluation based on a large amount of real data collected in five popular online social networks shows that our methods are accurate and can easily scale to networks with millions of nodes.\n To demonstrate the practical values of our techniques, we consider a significant application of proximity estimation: link prediction, i.e., predicting which new edges will be added in the near future based on past snapshots of a social network. Our results reveal that (i) the effectiveness of different proximity measures for link prediction varies significantly across different online social networks and depends heavily on the fraction of edges contributed by the highest degree nodes, and (ii) combining multiple proximity measures consistently yields the best link prediction accuracy.", "title": "" } ]
[ { "docid": "6ab6a9625db7b13116f41b19b8ecb62c", "text": "We describe a set of experiments using a wide range of machine learning techniques for the task of predicting the rhetorical status of sentences. The research is part of a text summarisation project for the legal domain for which we use a new corpus of judgments of the UK House of Lords. We present experimental results for classification according to a rhetorical scheme indicating a sentence's contribution to the overall argumentative structure of the legal judgments using four learning algorithms from the Weka package (C4.5, naïve Bayes, Winnow and SVMs). We also report results using maximum entropy models both in a standard classification framework and in a sequence labelling framework. The SVM classifier and the maximum entropy sequence tagger yield the most promising results.", "title": "" }, { "docid": "f132d1e91058ebc9484464e006a16da0", "text": "We propose drl-RPN, a deep reinforcement learning-based visual recognition model consisting of a sequential region proposal network (RPN) and an object detector. In contrast to typical RPNs, where candidate object regions (RoIs) are selected greedily via class-agnostic NMS, drl-RPN optimizes an objective closer to the final detection task. This is achieved by replacing the greedy RoI selection process with a sequential attention mechanism which is trained via deep reinforcement learning (RL). Our model is capable of accumulating class-specific evidence over time, potentially affecting subsequent proposals and classification scores, and we show that such context integration significantly boosts detection accuracy. Moreover, drl-RPN automatically decides when to stop the search process and has the benefit of being able to jointly learn the parameters of the policy and the detector, both represented as deep networks. Our model can further learn to search over a wide range of exploration-accuracy trade-offs making it possible to specify or adapt the exploration extent at test time. The resulting search trajectories are image- and category-dependent, yet rely only on a single policy over all object categories. Results on the MS COCO and PASCAL VOC challenges show that our approach outperforms established, typical state-of-the-art object detection pipelines.", "title": "" }, { "docid": "fe70c7614c0414347ff3c8bce7da47e7", "text": "We explore a model of stress prediction in Russian using a combination of local contextual features and linguisticallymotivated features associated with the word’s stem and suffix. We frame this as a ranking problem, where the objective is to rank the pronunciation with the correct stress above those with incorrect stress. We train our models using a simple Maximum Entropy ranking framework allowing for efficient prediction. An empirical evaluation shows that a model combining the local contextual features and the linguistically-motivated non-local features performs best in identifying both primary and secondary stress.", "title": "" }, { "docid": "d756896358214822246fcb8f247248ad", "text": "Text-based Captchas have been widely deployed across the Internet to defend against undesirable or malicious bot programs. Many attacks have been proposed; these fine prior art advanced the scientific understanding of Captcha robustness, but most of them have a limited applicability. 
In this paper, we report a simple, low-cost but powerful attack that effectively breaks a wide range of text Captchas with distinct design features, including those deployed by Google, Microsoft, Yahoo!, Amazon and other Internet giants. For all the schemes, our attack achieved a success rate ranging from 5% to 77%, and achieved an average speed of solving a puzzle in less than 15 seconds on a standard desktop computer (with a 3.3GHz Intel Core i3 CPU and 2 GB RAM). This is to date the simplest generic attack on text Captchas. Our attack is based on Log-Gabor filters; a famed application of Gabor filters in computer security is John Daugman’s iris recognition algorithm. Our work is the first to apply Gabor filters for breaking Captchas.", "title": "" }, { "docid": "43085c5afcf3a576a3f2169de4402645", "text": "In this study, we systematically investigate the impact of class imbalance on classification performance of convolutional neural networks (CNNs) and compare frequently used methods to address the issue. Class imbalance is a common problem that has been comprehensively studied in classical machine learning, yet very limited systematic research is available in the context of deep learning. In our study, we use three benchmark datasets of increasing complexity, MNIST, CIFAR-10 and ImageNet, to investigate the effects of imbalance on classification and perform an extensive comparison of several methods to address the issue: oversampling, undersampling, two-phase training, and thresholding that compensates for prior class probabilities. Our main evaluation metric is area under the receiver operating characteristic curve (ROC AUC) adjusted to multi-class tasks since overall accuracy metric is associated with notable difficulties in the context of imbalanced data. Based on results from our experiments we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method of addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance; (iv) as opposed to some classical machine learning models, oversampling does not cause overfitting of CNNs; (v) thresholding should be applied to compensate for prior class probabilities when overall number of properly classified cases is of interest.", "title": "" }, { "docid": "0bfbd3c2273350c47abf3c786ffeb40d", "text": "Fifty years ago, on June 26, 1954, in the town of Obninsk, near Moscow in the former USSR, the first nuclear power plant was connected to an electricity grid to provide power. This was the world's first nuclear power plant to generate electricity for a power grid, and produced around 5 MWe [1]. This first nuclear reactor was built twelve years after the occurrence of the first controlled fission reaction on December 2, 1942, at the Manhattan Engineering District, in Chicago, Illinois, US. In 1955 the USS Nautilus, the first nuclear propelled submarine, equipped with a pressurized water reactor (PWR), was launched. 
The race for nuclear technology spanned several countries and soon commercial reactors, called first generation nuclear reactors, were built in the US (Shippingport, a 60 MWe PWR, operated 1957-1982, Dresden, a boiling water reactor, BWR, operated 1960-1978, and Fermi I, a fast breeder reactor, operated 1957-1972) and the United Kingdom (Magnox, a pressurized, carbon dioxide cooled, graphite-moderated reactor using natural uranium).", "title": "" }, { "docid": "dd1e7bb3ba33c5ea711c0d066db53fa9", "text": "This paper presents the development and test of a flexible control strategy for an 11-kW wind turbine with a back-to-back power converter capable of working in both stand-alone and grid-connection mode. The stand-alone control is featured with a complex output voltage controller capable of handling nonlinear load and excess or deficit of generated power. Grid-connection mode with current control is also enabled for the case of isolated local grid involving other dispersed power generators such as other wind turbines or diesel generators. A novel automatic mode switch method based on a phase-locked loop controller is developed in order to detect the grid failure or recovery and switch the operation mode accordingly. A flexible digital signal processor (DSP) system that allows user-friendly code development and online tuning is used to implement and test the different control strategies. The back-to-back power conversion configuration is chosen where the generator converter uses a built-in standard flux vector control to control the speed of the turbine shaft while the grid-side converter uses a standard pulse-width modulation active rectifier control strategy implemented in a DSP controller. The design of the longitudinal conversion loss filter and of the involved PI-controllers is described in detail. Test results show the proposed methods work properly.", "title": "" }, { "docid": "fb1b80f1e7109b382994ca61b993ad71", "text": "We present a novel approach to real-time dense visual SLAM. Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments explored using an RGB-D camera in an incremental online fashion, without pose graph optimisation or any postprocessing steps. This is accomplished by using dense frame-to-model camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimisations as often as possible to stay close to the mode of the map distribution, while utilising global loop closure to recover from arbitrary drift and maintain global consistency.", "title": "" }, { "docid": "a56f1d8fb72393cbafb92df51dfe3239", "text": "Approximately 2,800 years ago, a blind poet wandered from city to city in Greece telling a tall tale—that of a nobleman, war hero, and daring adventurer who did not catch sight of his homeland for 20 long years. The poet was Homer and the larger-than-life character was Odysseus. Homer sang the epic adventures of cunning Odysseus who fought the Trojan War for 10 years and labored for another 10 on a die-hard mission to return to his homeland, the island of Ithaca, and reunite with his loyal wife, Penelope, and their son, Telemachus. Three of the 10 return years were spent on sea, facing the wrath of Gods, monsters, and assorted evil-doers. The other 7 were spent on the island of Ogygia, in the seducing arms of a nymph, the beautiful and possessive Calypso. 
Yet, despite this dolce vita, Odysseus never took his mind off Ithaca, refusing Calypso’s offer to make him immortal. On the edge of ungratefulness, he confided to his mistress, “Full well I acknowledge Prudent Penelope cannot compare with your stature or beauty, for she is only a mortal, and you are immortal and ageless. Nevertheless it is she whom I daily desire and pine for. Therefore I long for my home and to see the day of returning” (Homer, The Odyssey, trans. 1921, Book V, pp. 78–79). 1 Return was continually on Odysseus’ mind, and the Greek word for it is nostos. His burning wish for nostos afflicted unbearable suffering on Odysseus, and the Greek word for it is algos. Nostalgia, then, is the psychological suffering caused by unrelenting yearning to", "title": "" }, { "docid": "cf7af6838ae725794653bfce39c609b8", "text": "This paper strives to find the sentence best describing the content of an image or video. Different from existing works, which rely on a joint subspace for image / video to sentence matching, we propose to do so in a visual space only. We contribute Word2VisualVec, a deep neural network architecture that learns to predict a deep visual encoding of textual input based on sentence vectorization and a multi-layer perceptron. We thoroughly analyze its architectural design, by varying the sentence vectorization strategy, network depth and the deep feature to predict for image to sentence matching. We also generalize Word2VisualVec for matching a video to a sentence, by extending the predictive abilities to 3-D ConvNet features as well as a visual-audio representation. Experiments on four challenging image and video benchmarks detail Word2VisualVec’s properties, capabilities for image and video to sentence matching, and on all datasets its state-of-the-art results.", "title": "" }, { "docid": "b44bf94943c26933b1d3cbab84c539f9", "text": "Physiologic Growth and Development During Adolescence. David S. Rosen. Pediatrics in Review 2004;25:194. Available online at http://pedsinreview.aappublications.org/content/25/6/194", "title": "" }, { "docid": "bd3051871dfff82a4f9852f16bd3412f", "text": "Network intrusion is a critical challenge in information and communication systems amongst other forms of fraud perpetrated over the Internet. Despite the various traditional techniques proposed to prevent this intrusion, the threat persists. These days, intrusion detection systems (IDS) are faced with detecting attacks in large streams of connections due to the sporadic increase in network traffics. Although machine learning (ML) has been introduced in IDS to deal with finding patterns in big data, the irrelevant features in the data tend to degrade both the speed and accuracy of detection of attacks. Also, it increases the computational resource needed during training and testing of IDS models. 
Therefore, in this paper, we seek to find the optimal feature set using discretized differential evolution (DDE) and C4.5 ML algorithm from NSL-KDD standard intrusion dataset. The result obtained shows a significant improvement in detection accuracy, a reduction in training and testing time using the reduced feature set. The method also buttresses the fact that differential evolution (DE) is not limited to optimization of continuous problems but work well for discrete optimization.", "title": "" }, { "docid": "bd4316193b5cfa465dd2a5bdca990a86", "text": "Electroporation is a fascinating cell membrane phenomenon with several existing biological applications and others likely. Although DNA introduction is the most common use, electroporation of isolated cells has also been used for: (1) introduction of enzymes, antibodies, and other biochemical reagents for intracellular assays; (2) selective biochemical loading of one size cell in the presence of many smaller cells; (3) introduction of virus and other particles; (4) cell killing under nontoxic conditions; and (5) insertion of membrane macromolecules into the cell membrane. More recently, tissue electroporation has begun to be explored, with potential applications including: (1) enhanced cancer tumor chemotherapy, (2) gene therapy, (3) transdermal drug delivery, and (4) noninvasive sampling for biochemical measurement. As presently understood, electroporation is an essentially universal membrane phenomenon that occurs in cell and artificial planar bilayer membranes. For short pulses (microsecond to ms), electroporation occurs if the transmembrane voltage, U(t), reaches 0.5-1.5 V. In the case of isolated cells, the pulse magnitude is 10(3)-10(4) V/cm. These pulses cause reversible electrical breakdown (REB), accompanied by a tremendous increase molecular transport across the membrane. REB results in a rapid membrane discharge, with the elevated U(t) returning to low values within a few microseconds of the pulse. However, membrane recovery can be orders of magnitude slower. An associated cell stress commonly occurs, probably because of chemical influxes and effluxes leading to chemical imbalances, which also contribute to eventual survival or death. Basic phenomena, present understanding of mechanism, and the existing and potential applications are briefly reviewed.", "title": "" }, { "docid": "25121ccd316cd2b9a31c7651a32f92ea", "text": "Chatbot has become an important solution to rapidly increasing customer care demands on social media in recent years. However, current work on chatbot for customer care ignores a key to impact user experience - tones. In this work, we create a novel tone-aware chatbot that generates toned responses to user requests on social media. We first conduct a formative research, in which the effects of tones are studied. Significant and various influences of different tones on user experience are uncovered in the study. With the knowledge of effects of tones, we design a deep learning based chatbot that takes tone information into account. We train our system on over 1.5 million real customer care conversations collected from Twitter. The evaluation reveals that our tone-aware chatbot generates as appropriate responses to user requests as human agents. 
More importantly, our chatbot is perceived to be even more empathetic than human agents.", "title": "" }, { "docid": "b271916d455789760d1aa6fda6af85c3", "text": "Over the last decade, automated vehicles have been widely researched and their massive potential has been verified through several milestone demonstrations. However, there are still many challenges ahead. One of the biggest challenges is integrating them into urban environments in which dilemmas occur frequently. Conventional automated driving strategies make automated vehicles foolish in dilemmas such as making lane-change in heavy traffic, handling a yellow traffic light and crossing a double-yellow line to pass an illegally parked car. In this paper, we introduce a novel automated driving strategy that allows automated vehicles to tackle these dilemmas. The key insight behind our automated driving strategy is that expert drivers understand human interactions on the road and comply with mutually-accepted rules, which are learned from countless experiences. In order to teach the driving strategy of expert drivers to automated vehicles, we propose a general learning framework based on maximum entropy inverse reinforcement learning and Gaussian process. Experiments are conducted on a 5.2 km-long campus road at Seoul National University and demonstrate that our framework performs comparably to expert drivers in planning trajectories to handle various dilemmas.", "title": "" }, { "docid": "995ad137b6711f254c6b9852611242b5", "text": "In this paper, we study beam selection for millimeter-wave (mm-wave) multiuser multiple input multiple output (MIMO) systems where a base station (BS) and users are equipped with antenna arrays. Exploiting a certain sparsity of mm-wave channels, a low-complexity beam selection method for beamforming by low-cost analog beamformers is derived. It is shown that beam selection can be carried out without explicit channel estimation using the notion of compressive sensing (CS). Due to various reasons (e.g., the background noise and interference), some users may choose the same BS beam, which results in high inter-user interference. To overcome this problem, we further consider BS beam selection by users. Through simulations, we show that the performance gap between the proposed approach and the optimal beamforming approach, which requires full channel state information (CSI), becomes narrower for a larger number of users at a moderate/low signal-to-noise ratio (SNR). Since the optimal beamforming approach is difficult to be used due to prohibitively high computational complexity for large antenna arrays with a large number of users, the proposed approach becomes attractive for BSs and users in mm-wave systems where large antenna arrays can be employed.", "title": "" }, { "docid": "ac6d474171bfe6bc2457bfb3674cc5a6", "text": "The energy consumption problem in the mobile industry has become crucial. For the sustainable growth of the mobile industry, energy efficiency (EE) of wireless systems has to be significantly improved. Plenty of efforts have been invested in achieving green wireless communications. This article provides an overview of network energy saving studies currently conducted in the 3GPP LTE standard body. The aim is to gain a better understanding of energy consumption and identify key EE research problems in wireless access networks. Classifying network energy saving technologies into the time, frequency, and spatial domains, the main solutions in each domain are described briefly. 
As presently the attention is mainly focused on solutions involving a single radio base station, we believe network solutions involving multiple networks/systems will be the most promising technologies toward green wireless access networks.", "title": "" }, { "docid": "4d58a451c018b25aaab9ab1312a0998c", "text": "This paper presents a set of techniques that makes constraint programming a technique of choice for solving small (up to 30 nodes) traveling salesman problems. These techniques include a propagation scheme to avoid intermediate cycles (a global constraint), a branching scheme and a redundant constraint that can be used as a bounding method. The resulting improvement is that we can solve problems twice larger than those solved previously with constraint programming tools. We evaluate the use of Lagrangean Relaxation to narrow the gap between constraint programming and other Operations Research techniques and we show that improved constraint propagation has now a place in the array of techniques that should be used to solve a traveling salesman problem.", "title": "" }, { "docid": "e5b0200c7fffd4ff3934969ff67de5b4", "text": "We present a proposal-\"The Sampling Hypothesis\"-suggesting that the variability in young children's responses may be part of a rational strategy for inductive inference. In particular, we argue that young learners may be randomly sampling from the set of possible hypotheses that explain the observed data, producing different hypotheses with frequencies that reflect their subjective probability. We test the Sampling Hypothesis with four experiments on 4- and 5-year-olds. In these experiments, children saw a distribution of colored blocks and an event involving one of these blocks. In the first experiment, one block fell randomly and invisibly into a machine, and children made multiple guesses about the color of the block, either immediately or after a 1-week delay. The distribution of guesses was consistent with the distribution of block colors, and the dependence between guesses decreased as a function of the time between guesses. In Experiments 2 and 3 the probability of different colors was systematically varied by condition. Preschoolers' guesses tracked the probabilities of the colors, as should be the case if they are sampling from the set of possible explanatory hypotheses. Experiment 4 used a more complicated two-step process to randomly select a block and found that the distribution of children's guesses matched the probabilities resulting from this process rather than the overall frequency of different colors. This suggests that the children's probability matching reflects sophisticated probabilistic inferences and is not merely the result of a naïve tabulation of frequencies. Taken together the four experiments provide support for the Sampling Hypothesis, and the idea that there may be a rational explanation for the variability of children's responses in domains like causal inference.", "title": "" }, { "docid": "bb9f86e800e3f00bf7b34be85d846ff0", "text": "This paper presents a survey of the autopilot systems for small fixed-wing unmanned air vehicles (UAVs). The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both hardware and software viewpoints. Several typical commercial off-the-shelf autopilot packages are compared in detail. In addition, some research autopilot systems are introduced. 
Finally, conclusions are made with a summary of the current autopilot market and a remark on the future development.", "title": "" } ]
scidocsrr
46b9ef0704e87a27b376720750fb1259
Predicting taxi demand at high spatial resolution: Approaching the limit of predictability
[ { "docid": "d0bb1b3fc36016b166eb9ed25cb7ee61", "text": "Informed driving is increasingly becoming a key feature for increasing the sustainability of taxi companies. The sensors that are installed in each vehicle are providing new opportunities for automatically discovering knowledge, which, in return, delivers information for real-time decision making. Intelligent transportation systems for taxi dispatching and for finding time-saving routes are already exploring these sensing data. This paper introduces a novel methodology for predicting the spatial distribution of taxi-passengers for a short-term time horizon using streaming data. First, the information was aggregated into a histogram time series. Then, three time-series forecasting techniques were combined to originate a prediction. Experimental tests were conducted using the online data that are transmitted by 441 vehicles of a fleet running in the city of Porto, Portugal. The results demonstrated that the proposed framework can provide effective insight into the spatiotemporal distribution of taxi-passenger demand for a 30-min horizon.", "title": "" }, { "docid": "b294ca2034fa4133e8f7091426242244", "text": "The development of a city gradually fosters different functional regions, such as educational areas and business districts. In this paper, we propose a framework (titled DRoF) that Discovers Regions of different Functions in a city using both human mobility among regions and points of interests (POIs) located in a region. Specifically, we segment a city into disjointed regions according to major roads, such as highways and urban express ways. We infer the functions of each region using a topic-based inference model, which regards a region as a document, a function as a topic, categories of POIs (e.g., restaurants and shopping malls) as metadata (like authors, affiliations, and key words), and human mobility patterns (when people reach/leave a region and where people come from and leave for) as words. As a result, a region is represented by a distribution of functions, and a function is featured by a distribution of mobility patterns. We further identify the intensity of each function in different locations. The results generated by our framework can benefit a variety of applications, including urban planning, location choosing for a business, and social recommendations. We evaluated our method using large-scale and real-world datasets, consisting of two POI datasets of Beijing (in 2010 and 2011) and two 3-month GPS trajectory datasets (representing human mobility) generated by over 12,000 taxicabs in Beijing in 2010 and 2011 respectively. The results justify the advantages of our approach over baseline methods solely using POIs or human mobility.", "title": "" } ]
[ { "docid": "c5122000c9d8736cecb4d24e6f56aab8", "text": "New credit cards containing Europay, MasterCard and Visa (EMV) chips for enhanced security used in-store purchases rather than online purchases have been adopted considerably. EMV supposedly protects the payment cards in such a way that the computer chip in a card referred to as chip-and-pin cards generate a unique one time code each time the card is used. The one time code is designed such that if it is copied or stolen from the merchant system or from the system terminal cannot be used to create a counterfeit copy of that card or counterfeit chip of the transaction. However, in spite of this design, EMV technology is not entirely foolproof from failure. In this paper we discuss the issues, failures and fraudulent cases associated with EMV Chip-And-Card technology.", "title": "" }, { "docid": "8688f904ff190f9434cf20c6fc0f7eb9", "text": "3-D shape analysis has attracted extensive research efforts in recent years, where the major challenge lies in designing an effective high-level 3-D shape feature. In this paper, we propose a multi-level 3-D shape feature extraction framework by using deep learning. The low-level 3-D shape descriptors are first encoded into geometric bag-of-words, from which middle-level patterns are discovered to explore geometric relationships among words. After that, high-level shape features are learned via deep belief networks, which are more discriminative for the tasks of shape classification and retrieval. Experiments on 3-D shape recognition and retrieval demonstrate the superior performance of the proposed method in comparison to the state-of-the-art methods.", "title": "" }, { "docid": "36e4260c43efca5a67f99e38e5dbbed8", "text": "The inherent compliance of soft fluidic actuators makes them attractive for use in wearable devices and soft robotics. Their flexible nature permits them to be used without traditional rotational or prismatic joints. Without these joints, however, measuring the motion of the actuators is challenging. Actuator-level sensors could improve the performance of continuum robots and robots with compliant or multi-degree-of-freedom joints. We make the reinforcing braid of a pneumatic artificial muscle (PAM or McKibben muscle) “smart” by weaving it from conductive insulated wires. These wires form a solenoid-like circuit with an inductance that more than doubles over the PAM contraction. The reinforcing and sensing fibers can be used to measure the contraction of a PAM actuator with a simple linear function of the measured inductance, whereas other proposed self-sensing techniques rely on the addition of special elastomers or transducers, the technique presented in this paper can be implemented without modifications of this kind. We present and experimentally validate two models for Smart Braid sensors based on the long solenoid approximation and the Neumann formula, respectively. We test a McKibben muscle made from a Smart Braid in quasi-static conditions with various end loads and in dynamic conditions. We also test the performance of the Smart Braid sensor alongside steel.", "title": "" }, { "docid": "1ca692464d5d7f4e61647bf728941519", "text": "During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. 
The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RS(C)) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RS(C) neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses.", "title": "" }, { "docid": "e444dcc97882005658aca256991e816e", "text": "The terms superordinate, hyponym, and subordinate designate the hierarchical taxonomic relationship of words. They also represent categories and concepts. This relationship is a subject of interest for anthropology, cognitive psychology, psycholinguistics, linguistic semantics, and cognitive linguistics. Taxonomic hierarchies are essentially classificatory systems, and they are supposed to reflect the way that speakers of a language categorize the world of experience. A well-formed taxonomy offers an orderly and efficient set of categories at different levels of specificity (Cruse 2000:180). However, the terms and levels of taxonomic hierarchy used in each discipline vary. This makes it difficult to carry out cross-disciplinary readings on the hierarchical taxonomy of words or categories, which act as an interface in these cognitive-based cross-disciplinary ventures. Not only words— terms and concepts differ but often the nature of the problem is compounded as some terms refer to differing word classes, categories and concepts at the same time. Moreover, the lexical relationship of terms among these lexical hierarchies is far from clear. As a result two lines of thinking can be drawn from the literature: (1) technical terms coined for the hierarchical relationship of words are conflicting and do not reflect reality or environment, and (2) the relationship among these hierarchies of word levels and the underlying principles followed to explain them are uncertain except that of inclusion.", "title": "" }, { "docid": "06ba81270357c9bcf1dd8f1871741537", "text": "The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds. Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. 
In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of “listening” to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using", "title": "" }, { "docid": "4fe39e3d2e7c04263e9015c773a755fb", "text": "This paper presents a novel approach to building natural language interface to databases (NLIDB) based on Computational Paninian Grammar (CPG). It uses two distinct stages of processing, namely, syntactic processing followed by semantic processing. Syntactic processing makes the processing more general and robust. CPG is a dependency framework in which the analysis is in terms of syntactico-semantic relations. The closeness of these relations makes semantic processing easier and more accurate. It also makes the systems more portable.", "title": "" }, { "docid": "9548bd2e37fdd42d09dc6828ac4675f9", "text": "Recent years have seen increasing interest in ranking elite athletes and teams in professional sports leagues, and in predicting the outcomes of games. In this work, we draw an analogy between this problem and one in the field of search engine optimization, namely, that of ranking webpages on the Internet. Motivated by the famous PageRank algorithm, our TeamRank methods define directed graphs of sports teams based on the observed outcomes of individual games, and use these networks to infer the importance of teams that determines their rankings. In evaluating these methods on data from recent seasons in the National Football League (NFL) and National Basketball Association (NBA), we find that they can predict the outcomes of games with up to 70% accuracy, and that they provide useful rankings of teams that cluster by league divisions. We also propose some extensions to TeamRank that consider overall team win records and shifts in momentum over time.", "title": "" }, { "docid": "54dc81aca62267eecf1f5f8a8ace14b9", "text": "Advances in deep learning have led to substantial increases in prediction accuracy but have been accompanied by increases in the cost of rendering predictions. We conjecture that for a majority of real-world inputs, the recent advances in deep learning have created models that effectively “over-think” on simple inputs. In this paper we revisit the classic question of building model cascades that primarily leverage class asymmetry to reduce cost. We introduce the “I Don’t Know” (IDK) prediction cascades framework, a general framework to systematically compose a set of pre-trained models to accelerate inference without a loss in prediction accuracy. We propose two search based methods for constructing cascades as well as a new cost-aware objective within this framework. 
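A minimal sketch of an "I Don't Know" style cascade, in which a cheap model answers the cases it is confident about and defers the rest to a costlier model; the confidence-threshold rule and the model interface are assumptions for illustration, not the search-based cascade construction described in the passage.

# Hypothetical IDK-style prediction cascade over pre-trained models.
def idk_cascade(x, models, thresholds):
    """models: callables returning (label, confidence), cheapest first.
    thresholds: confidence needed to stop at each early stage."""
    for model, tau in zip(models[:-1], thresholds):
        label, confidence = model(x)
        if confidence >= tau:          # confident enough: answer early, save compute
            return label
    return models[-1](x)[0]            # otherwise fall back to the most accurate model

# toy stand-ins for pre-trained classifiers of increasing cost and accuracy
cheap  = lambda x: ("cat" if x < 0.5 else "dog", abs(x - 0.5) * 2)
costly = lambda x: ("cat" if x < 0.45 else "dog", 0.99)

print(idk_cascade(0.48, [cheap, costly], thresholds=[0.8]))  # low confidence -> escalated
print(idk_cascade(0.05, [cheap, costly], thresholds=[0.8]))  # answered by the cheap model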
The proposed IDK cascade framework can be easily adopted in the existing model serving systems without additional model retraining. We evaluate the proposed techniques on a range of benchmarks to demonstrate the effectiveness of the proposed framework.", "title": "" }, { "docid": "a58cbbff744568ae7abd2873d04d48e9", "text": "Training real-world Deep Neural Networks (DNNs) can take an eon (i.e., weeks or months) without leveraging distributed systems. Even distributed training takes inordinate time, of which a large fraction is spent in communicating weights and gradients over the network. State-of-the-art distributed training algorithms use a hierarchy of worker-aggregator nodes. The aggregators repeatedly receive gradient updates from their allocated group of the workers, and send back the updated weights. This paper sets out to reduce this significant communication cost by embedding data compression accelerators in the Network Interface Cards (NICs). To maximize the benefits of in-network acceleration, the proposed solution, named INCEPTIONN (In-Network Computing to Exchange and Process Training Information Of Neural Networks), uniquely combines hardware and algorithmic innovations by exploiting the following three observations. (1) Gradients are significantly more tolerant to precision loss than weights and as such lend themselves better to aggressive compression without the need for the complex mechanisms to avert any loss. (2) The existing training algorithms only communicate gradients in one leg of the communication, which reduces the opportunities for in-network acceleration of compression. (3) The aggregators can become a bottleneck with compression as they need to compress/decompress multiple streams from their allocated worker group. To this end, we first propose a lightweight and hardware-friendly lossy-compression algorithm for floating-point gradients, which exploits their unique value characteristics. This compression not only enables significantly reducing the gradient communication with practically no loss of accuracy, but also comes with low complexity for direct implementation as a hardware block in the NIC. To maximize the opportunities for compression and avoid the bottleneck at aggregators, we also propose an aggregator-free training algorithm that exchanges gradients in both legs of communication in the group, while the workers collectively perform the aggregation in a distributed manner. Without changing the mathematics of training, this algorithm leverages the associative property of the aggregation operator and enables our in-network accelerators to (1) apply compression for all communications, and (2) prevent the aggregator nodes from becoming bottlenecks. Our experiments demonstrate that INCEPTIONN reduces the communication time by 70.9~80.7% and offers 2.2~3.1x speedup over the conventional training system, while achieving the same level of accuracy.", "title": "" }, { "docid": "acf514a4aa34487121cc853e55ceaed4", "text": "Stereotype threat spillover is a situational predicament in which coping with the stress of stereotype confirmation leaves one in a depleted volitional state and thus less likely to engage in effortful self-control in a variety of domains. 
We examined this phenomenon in 4 studies in which we had participants cope with stereotype and social identity threat and then measured their performance in domains in which stereotypes were not \"in the air.\" In Study 1 we examined whether taking a threatening math test could lead women to respond aggressively. In Study 2 we investigated whether coping with a threatening math test could lead women to indulge themselves with unhealthy food later on and examined the moderation of this effect by personal characteristics that contribute to identity-threat appraisals. In Study 3 we investigated whether vividly remembering an experience of social identity threat results in risky decision making. Finally, in Study 4 we asked whether coping with threat could directly influence attentional control and whether the effect was implemented by inefficient performance monitoring, as assessed by electroencephalography. Our results indicate that stereotype threat can spill over and impact self-control in a diverse array of nonstereotyped domains. These results reveal the potency of stereotype threat and that its negative consequences might extend further than was previously thought.", "title": "" }, { "docid": "3a95b876619ce4b666278810b80cae77", "text": "On 14 November 2016, northeastern South Island of New Zealand was struck by a major moment magnitude (Mw) 7.8 earthquake. Field observations, in conjunction with interferometric synthetic aperture radar, Global Positioning System, and seismology data, reveal this to be one of the most complex earthquakes ever recorded. The rupture propagated northward for more than 170 kilometers along both mapped and unmapped faults before continuing offshore at the island’s northeastern extent. Geodetic and field observations reveal surface ruptures along at least 12 major faults, including possible slip along the southern Hikurangi subduction interface; extensive uplift along much of the coastline; and widespread anelastic deformation, including the ~8-meter uplift of a fault-bounded block. This complex earthquake defies many conventional assumptions about the degree to which earthquake ruptures are controlled by fault segmentation and should motivate reevaluation of these issues in seismic hazard models.", "title": "" }, { "docid": "64c44342abbce474e21df67c0a5cc646", "text": "In this paper it is shown that the principal eigenvector is a necessary representation of the priorities derived from a positive reciprocal pairwise comparison judgment matrix A 1⁄4 ðaijÞ when A is a small perturbation of a consistent matrix. When providing numerical judgments, an individual attempts to estimate sequentially an underlying ratio scale and its equivalent consistent matrix of ratios. Near consistent matrices are essential because when dealing with intangibles, human judgment is of necessity inconsistent, and if with new information one is able to improve inconsistency to near consistency, then that could improve the validity of the priorities of a decision. In addition, judgment is much more sensitive and responsive to large rather than to small perturbations, and hence once near consistency is attained, it becomes uncertain which coefficients should be perturbed by small amounts to transform a near consistent matrix to a consistent one. If such perturbations were forced, they could be arbitrary and thus distort the validity of the derived priority vector in representing the underlying decision. 2002 Elsevier Science B.V. 
All rights reserved.", "title": "" }, { "docid": "6e4798c01a0a241d1f3746cd98ba9421", "text": "BACKGROUND\nLarge blood-based prospective studies can provide reliable assessment of the complex interplay of lifestyle, environmental and genetic factors as determinants of chronic disease.\n\n\nMETHODS\nThe baseline survey of the China Kadoorie Biobank took place during 2004-08 in 10 geographically defined regions, with collection of questionnaire data, physical measurements and blood samples. Subsequently, a re-survey of 25,000 randomly selected participants was done (80% responded) using the same methods as in the baseline. All participants are being followed for cause-specific mortality and morbidity, and for any hospital admission through linkages with registries and health insurance (HI) databases.\n\n\nRESULTS\nOverall, 512,891 adults aged 30-79 years were recruited, including 41% men, 56% from rural areas and mean age was 52 years. The prevalence of ever-regular smoking was 74% in men and 3% in women. The mean blood pressure was 132/79 mmHg in men and 130/77 mmHg in women. The mean body mass index (BMI) was 23.4 kg/m(2) in men and 23.8 kg/m(2) in women, with only 4% being obese (>30 kg/m(2)), and 3.2% being diabetic. Blood collection was successful in 99.98% and the mean delay from sample collection to processing was 10.6 h. For each of the main baseline variables, there is good reproducibility but large heterogeneity by age, sex and study area. By 1 January 2011, over 10,000 deaths had been recorded, with 91% of surviving participants already linked to HI databases.\n\n\nCONCLUSION\nThis established large biobank will be a rich and powerful resource for investigating genetic and non-genetic causes of many common chronic diseases in the Chinese population.", "title": "" }, { "docid": "49387b129347f7255bf77ad9cc726275", "text": "Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the “long tail” of this distribution requires enormous amounts of data. Representations of rare words trained directly on end-tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained against the end task. We show that this improves results against baselines where embeddings are trained on the end task in a reading comprehension task, a recognizing textual entailment task, and in language modelling.", "title": "" }, { "docid": "ed351364658a99d4d9c10dd2b9be3c92", "text": "Information technology continues to provide opportunities to alter the decisionmaking behavior of individuals, groups and organizations. Two related changes that are emerging are social media and Web 2.0 technologies. These technologies can positively and negatively impact the rationality and effectiveness of decision-making. For example, changes that help marketing managers alter consumer decision behavior may result in poorer decisions by consumers. Also, managers who heavily rely on a social network rather than expert opinion and facts may make biased decisions. A number of theories can help explain how social media may impact decision-making and the consequences.", "title": "" }, { "docid": "0b705fc98638cf042e84417849259074", "text": "G et al. [Gallego, G., G. Iyengar, R. Phillips, A. Dubey. 2004. 
Managing flexible products on a network. CORC Technical Report TR-2004-01, Department of Industrial Engineering and Operations Research, Columbia University, New York.] recently proposed a choice-based deterministic linear programming model (CDLP) for network revenue management (RM) that parallels the widely used deterministic linear programming (DLP) model. While they focused on analyzing “flexible products”—a situation in which the provider has the flexibility of using a collection of products (e.g., different flight times and/or itineraries) to serve the same market demand (e.g., an origin-destination connection)—their approach has broader implications for understanding choice-based RM on a network. In this paper, we explore the implications in detail. Specifically, we characterize optimal offer sets (sets of available network products) by extending to the network case a notion of “efficiency” developed by Talluri and van Ryzin [Talluri, K. T., G. J. van Ryzin. 2004. Revenue management under a general discrete choice model of consumer behavior. Management Sci. 50 15–33.] for the single-leg, choice-based RM problem. We show that, asymptotically, as demand and capacity are scaled up, only these efficient sets are used in an optimal policy. This analysis suggests that efficiency is a potentially useful approach for identifying “good” offer sets on networks, as it is in the case of single-leg problems. Second, we propose a practical decomposition heuristic for converting the static CDLP solution into a dynamic control policy. The heuristic is quite similar to the familiar displacement-adjusted virtual nesting (DAVN) approximation used in traditional network RM, and it significantly improves on the performance of the static LP solution. We illustrate the heuristic on several numerical examples.", "title": "" }, { "docid": "7feda29a5edf6855895f91f80c3286a4", "text": "The ability to conduct logical reasoning is a fundamental aspect of intelligent behavior, and thus an important problem along the way to human-level artificial intelligence. Traditionally, symbolic logic-based methods from the field of knowledge representation and reasoning have been used to equip agents with capabilities that resemble human logical reasoning qualities. More recently, however, there has been an increasing interest in using machine learning rather than symbolic logic-based formalisms to tackle these tasks. In this paper, we employ state-of-the-art methods for training deep neural networks to devise a novel model that is able to learn how to effectively perform logical reasoning in the form of basic ontology reasoning. This is an important and at the same time very natural logical reasoning task, which is why the presented approach is applicable to a plethora of important real-world problems. We present the outcomes of several experiments, which show that our model learned to perform precise ontology reasoning on diverse and challenging tasks. Furthermore, it turned out that the suggested approach suffers much less from different obstacles that prohibit logic-based symbolic reasoning, and, at the same time, is surprisingly plausible from a biological point of view.", "title": "" }, { "docid": "9409922d01a00695745939b47e6446a0", "text": "The Suricata intrusion-detection system for computer-network monitoring has been advanced as an open-source improvement on the popular Snort system that has been available for over a decade. Suricata includes multi-threading to improve processing speed beyond Snort. 
Previous work comparing the two products has not used a real-world setting. We did this and evaluated the speed, memory requirements, and accuracy of the detection engines in three kinds of experiments: (1) on the full traffic of our school as observed on its \" backbone\" in real time, (2) on a supercomputer with packets recorded from the backbone, and (3) in response to malicious packets sent by a red-teaming product. We used the same set of rules for both products with a few small exceptions where capabilities were missing. We conclude that Suricata can handle larger volumes of traffic than Snort with similar accuracy, and that its performance scaled roughly linearly with the number of processors up to 48. We observed no significant speed or accuracy advantage of Suricata over Snort in its current state, but it is still being developed. Our methodology should be useful for comparing other intrusion-detection products.", "title": "" }, { "docid": "7065db83dbe470f430789ea8e464bd04", "text": "A compact multiband antenna is proposed that consists of a printed circular disc monopole antenna with an L-shaped slot cut out of the ground, forming a defected ground plane. Analysis of the current distribution on the antenna reveals that at low frequencies the addition of the slot creates two orthogonal current paths, which are responsible for two additional resonances in the response of the antenna. By virtue of the orthogonality of these modes the antenna exhibits orthogonal pattern diversity, while enabling the adjacent resonances to be merged, forming a wideband low-frequency response and maintaining the inherent wideband high-frequency response of the monopole. The antenna exhibits a measured -10 dB S 11 bandwidth of 600 MHz from 2.68 to 3.28 GHz, and a bandwidth of 4.84 GHz from 4.74 to 9.58 GHz, while the total size of the antenna is only 24 times 28.3 mm. The efficiency is measured using a modified Wheeler cap method and is verified using the gain comparison method to be approximately 90% at both 2.7 and 5.5 GHz.", "title": "" } ]
scidocsrr
7aaf3034d741d2c7cb4a68eb671b5415
Deep Facial Expression Recognition: A Survey
[ { "docid": "22b1c8d3c67ee28dca51a90021d42604", "text": "NTechLAB facenx_large Google FaceNet v8 Beijing Faceall Co. FaceAll_Norm_1600 Beijing Faceall Co. FaceAll_1600 large 73.300% 70.496% 64.803% 63.977% 85.081% 86.473% 67.118% 63.960% Barebones_FR cnn NTechLAB facenx_small 3DiVi Company – tdvm6 small 59.363% 58.218% 33.705% 59.036% 66.366% 36.927% model AModel BModel C(Proposed) small 41.863% 57.175% 65.234% 41.297% 69.897% 76.516% Method Protocol Identification Acc. (Set 1) Verification Acc. (Set 1) For generic object, scene or action recognition. The deeply learned features need to be separable. Because the classes of the possible testing samples are within the training set, the predicted labels dominate the performance.", "title": "" } ]
[ { "docid": "0fba05a38cb601a1b08e6105e6b949c1", "text": "This paper discusses how to implement Paillier homomorphic encryption (HE) scheme in Java as an API. We first analyze existing Pailler HE libraries and discuss their limitations. We then design a comparatively accomplished and efficient Pailler HE Java library. As a proof of concept, we applied our Pailler HE library in an electronic voting system that allows the voting server to sum up the candidates' votes in the encrypted form with voters remain anonymous. Our library records an average of only 2766ms for each vote placement through HTTP POST request.", "title": "" }, { "docid": "9b5e3212ae9bfa3df246ab403aee2d7d", "text": "We consider estimation in a particular semiparametric regression model for the mean of a counting process with “panel count” data. The basic model assumption is that the conditional mean function of the counting process is of the form E{N(t)|Z} = exp(β 0 Z)Λ0(t) where Z is a vector of covariates and Λ0 is the baseline mean function. The “panel count” observation scheme involves observation of the counting process N for an individual at a random number K of random time points; both the number and the locations of these time points may differ across individuals. We study semiparametric maximum pseudo-likelihood and maximum likelihood estimators of the unknown parameters (β0,Λ0) derived on the basis of a nonhomogeneous Poisson process assumption. The pseudo-likelihood estimator is fairly easy to compute, while the maximum likelihood estimator poses more challenges from the computational perspective. We study asymptotic properties of both estimators assuming that the proportional mean model holds, but dropping the Poisson process assumption used to derive the estimators. In particular we establish asymptotic normality for the estimators of the regression parameter β0 under appropriate hypotheses. The results show that our estimation procedures are robust in the sense that the estimators converge to the truth regardless of the underlying counting process.", "title": "" }, { "docid": "f70ff7f71ff2424fbcfea69d63a19de0", "text": "We propose a method for learning similaritypreserving hash functions that map highdimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.", "title": "" }, { "docid": "c3271548bf0c90541153e629dc298d61", "text": "A number of recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large dataset can be adopted as a universal image descriptor, and that doing so leads to impressive performance at a range of image classification tasks. Most of these studies, if not all, adopt activations of the fully-connected layer of a DCNN as the image or region representation and it is believed that convolutional layer activations are less discriminative. This paper, however, advocates that if used appropriately, convolutional layer activations constitute a powerful image representation. This is achieved by adopting a new technique proposed in this paper called cross-convolutional-layer pooling. More specifically, it extracts subarrays of feature maps of one convolutional layer as local features, and pools the extracted features with the guidance of the feature maps of the successive convolutional layer. 
Compared with existing methods that apply DCNNs in the similar local feature setting, the proposed method avoids the input image style mismatching issue which is usually encountered when applying fully connected layer activations to describe local regions. Also, the proposed method is easier to implement since it is codebook free and does not have any tuning parameters. By applying our method to four popular visual classification tasks, it is demonstrated that the proposed method can achieve comparable or in some cases significantly better performance than existing fully-connected layer based image representations.", "title": "" }, { "docid": "e6a6d7d4304fe14798597fbd5eae7ba5", "text": "BACKGROUND\nA significant proportion of trauma survivors experience an additional critical life event in the aftermath. These renewed experiences of traumatic and stressful life events may lead to an increase in trauma-related mental health symptoms.\n\n\nMETHOD\nIn a longitudinal study, the effects of renewed experiences of a trauma or stressful life event were examined. For this purpose, refugees seeking asylum in Germany were assessed for posttraumatic stress symptoms (PTS), Posttraumatic Stress Diagnostic Scale (PDS), anxiety, and depression (Hopkins Symptom Checklist [HSCL-25]) before treatment start as well as after 6 and 12 months during treatment (N=46). Stressful life events and traumatic events were recorded monthly. If a new event happened, PDS and HSCL were additionally assessed directly afterwards. Mann-Whitney U-tests were performed to calculate the differences between the group that experienced an additional critical event (stressful vs. trauma) during treatment (n=23) and the group that did not (n=23), as well as differences within the critical event group between the stressful life event group (n=13) and the trauma group (n=10).\n\n\nRESULTS\nRefugees improved significantly during the 12-month period of our study, but remained severely distressed. In a comparison of refugees with a new stressful life event or trauma, significant increases in PTS, anxiety, and depressive symptoms were found directly after the experience, compared to the group without a renewed event during the 12 months of treatment. With regard to the different critical life events (stressful vs. trauma), no significant differences were found regarding overall PTS, anxiety, and depression symptoms. Only avoidance symptoms increased significantly in the group experiencing a stressful life event.\n\n\nCONCLUSION\nAlthough all clinicians should be aware of possible PTS symptom reactivation, especially those working with refugees and asylum seekers, who often experience new critical life events, should understand symptom fluctuation and address it in treatment.", "title": "" }, { "docid": "26095dbc82b68c32881ad9316256bc42", "text": "BACKGROUND\nSchizophrenia causes great suffering for patients and families. Today, patients are treated with medications, but unfortunately many still have persistent symptoms and an impaired quality of life. During the last 20 years of research in cognitive behavioral therapy (CBT) for schizophrenia, evidence has been found that the treatment is good for patients but it is not satisfactory enough, and more studies are being carried out hopefully to achieve further improvement.\n\n\nPURPOSE\nClinical trials and meta-analyses are being used to try to prove the efficacy of CBT. 
In this article, we summarize recent research using the cognitive model for people with schizophrenia.\n\n\nMETHODS\nA systematic search was carried out in PubMed (Medline). Relevant articles were selected if they contained a description of cognitive models for schizophrenia or psychotic disorders.\n\n\nRESULTS\nThere is now evidence that positive and negative symptoms exist in a continuum, from normality (mild form and few symptoms) to fully developed disease (intensive form with many symptoms). Delusional patients have reasoning bias such as jumping to conclusions, and those with hallucination have impaired self-monitoring and experience their own thoughts as voices. Patients with negative symptoms have negative beliefs such as low expectations regarding pleasure and success. In the entire patient group, it is common to have low self-esteem.\n\n\nCONCLUSIONS\nThe cognitive model integrates very well with the aberrant salience model. It takes into account neurobiology, cognitive, emotional and social processes. The therapist uses this knowledge when he or she chooses techniques for treatment of patients.", "title": "" }, { "docid": "c86e4bf0577f49d6d4384379651c7d9a", "text": "The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions are discussed as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis on SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.", "title": "" }, { "docid": "06ef397d13383ff09f2f6741c0626192", "text": "A fully-integrated low-dropout regulator (LDO) with fast transient response and full spectrum power supply rejection (PSR) is proposed to provide a clean supply for noise-sensitive building blocks in wideband communication systems. With the proposed point-of-load LDO, chip-level high-frequency glitches are well attenuated, consequently the system performance is improved. A tri-loop LDO architecture is proposed and verified in a 65 nm CMOS process. In comparison to other fully-integrated designs, the output pole is set to be the dominant pole, and the internal poles are pushed to higher frequencies with only 50 μA of total quiescent current. For a 1.2 V input voltage and 1 V output voltage, the measured undershoot and overshoot is only 43 mV and 82 mV, respectively, for load transient of 0 μA to 10 mA within edge times of 200 ps. It achieves a transient response time of 1.15 ns and the figure-of-merit (FOM) of 5.74 ps. PSR is measured to be better than -12 dB over the whole spectrum (DC to 20 GHz tested). The prototype chip measures 260×90 μm2, including 140 pF of stacked on-chip capacitors.", "title": "" }, { "docid": "85d9b0ed2e9838811bf3b07bb31dbeb6", "text": "In recent years, the medium which has negative index of refraction is widely researched. The medium has both the negative permittivity and the negative permeability. 
In this paper, we have researched the frequency range widening of negative permeability using split ring resonators.", "title": "" }, { "docid": "2bf619a1af1bab48b4b6f57df8f29598", "text": "Alcoholism and drug addiction have marked impacts on the ability of families to function. Much of the literature has been focused on adult members of a family who present with substance dependency. There is limited research into the effects of adolescent substance dependence on parenting and family functioning; little attention has been paid to the parents' experience. This qualitative study looks at the parental perspective as they attempted to adapt and cope with substance dependency in their teenage children. The research looks into family life and adds to family functioning knowledge when the identified client is a youth as opposed to an adult family member. Thirty-one adult caregivers of 21 teenagers were interviewed, resulting in eight significant themes: (1) finding out about the substance dependence problem; (2) experiences as the problems escalated; (3) looking for explanations other than substance dependence; (4) connecting to the parent's own history; (5) trying to cope; (6) challenges of getting help; (7) impact on siblings; and (8) choosing long-term rehabilitation. Implications of this research for clinical practice are discussed.", "title": "" }, { "docid": "34993e22f91f3d5b31fe0423668a7eb1", "text": "K-means as a clustering algorithm has been studied in intrusion detection. However, with the deficiency of global search ability it is not satisfactory. Particle swarm optimization (PSO) is one of the evolutionary computation techniques based on swarm intelligence, which has high global search ability. So K-means algorithm based on PSO (PSO-KM) is proposed in this paper. Experiment over network connection records from KDD CUP 1999 data set was implemented to evaluate the proposed method. A Bayesian classifier was trained to select some fields in the data set. The experimental results clearly showed the outstanding performance of the proposed method", "title": "" }, { "docid": "8310851d5115ec570953a8c4a1757332", "text": "We present a global optimization approach for mapping color images onto geometric reconstructions. Range and color videos produced by consumer-grade RGB-D cameras suffer from noise and optical distortions, which impede accurate mapping of the acquired color data to the reconstructed geometry. Our approach addresses these sources of error by optimizing camera poses in tandem with non-rigid correction functions for all images. All parameters are optimized jointly to maximize the photometric consistency of the reconstructed mapping. We show that this optimization can be performed efficiently by an alternating optimization algorithm that interleaves analytical updates of the color map with decoupled parameter updates for all images. Experimental results demonstrate that our approach substantially improves color mapping fidelity.", "title": "" }, { "docid": "5b675ea7554dc8bf1707ecb4c4055de7", "text": "Researchers have highlighted that the main factors that contribute to IT service failure are the people, process and technology. However, relatively few empirical studies examine to what degree these factors contribute to service disruptions in the public sector organizations. 
This study explores the IT service management (ITSM) at eight (8) Front-end Agencies, four (4) Ministries and six (6) Departments in Malaysian public service to identify the level of contribution of each factor to the public IT service disruptions. This study was undertaken using questionnaires via stratified sampling. The empirical results reveal that human action, decision, management, error and failure are the major causes to the IT service disruptions followed by an improper process or procedures and technology failure. In addition, we can conclude that human is an important factor and need to give more attention by the management since human is the creator, who uses, manages and maintains the technology and process to enable the delivery of services as specified in the objectives, vision and mission of the organization. Although the literature states that human failure was due to knowledge, skill, attitude and behavior of an individual and the organization environment, but no literature was found studies on what characteristics of human and environmental organizations that make up the resilience service delivery and the creation of an organization that is resilient. Future research on what characteristics on human and organization environmental that contribute to organizational and business resilience is suggested at the end of the paper. However, this paper only covers literature that discussed in depth the type of human failure and the cause of failure. Nevertheless, it is believed that the findings provide a valuable understanding of the current situation in this research field.", "title": "" }, { "docid": "7885cdfd33df957b6803d3d94c8ac212", "text": "Ground penetrating radar (GPR) is non-destructive device used for monitoring underground structures. COST Action TU1208 promoted its use outside the civil engineering applications and provided a lot of free resources to the GPR community. In this paper, we built a low-cost GPR prototype for educational purposes according to the given resources and continued the work with focus on GPR antenna design. According to the required radiation characteristics, some antenna types are thoroughly discussed, fabricated and measured.", "title": "" }, { "docid": "de6de62ab783eb1b0a9347a6fa8dcacb", "text": "The human face is among the most significant objects in an image or video, it contains many important information and specifications, also is required to be the cause of almost all achievable look variants caused by changes in scale, location, orientation, pose, facial expression, lighting conditions and partial occlusions. It plays a key role in face recognition systems and many other face analysis applications. We focus on the feature based approach because it gave great results on detect the human face. Face feature detection techniques can be mainly divided into two kinds of approaches are Feature base and image base approach. Feature base approach tries to extract features and match it against the knowledge of the facial features. This paper gives the idea about challenging problems in the field of human face analysis and as such, as it has achieved a great attention over the last few years because of its many applications in various domains. 
Furthermore, several existing face detection approaches are analyzed and discussed and attempt to give the issues regarding key technologies of feature base methods, we had gone direct comparisons of the method's performance are made where possible and the advantages/ disadvantages of different approaches are discussed.", "title": "" }, { "docid": "0c5c83cfb63b335b327f044973514d23", "text": "With the explosion of healthcare information, there has been a tremendous amount of heterogeneous textual medical knowledge (TMK), which plays an essential role in healthcare information systems. Existing works for integrating and utilizing the TMK mainly focus on straightforward connections establishment and pay less attention to make computers interpret and retrieve knowledge correctly and quickly. In this paper, we explore a novel model to organize and integrate the TMK into conceptual graphs. We then employ a framework to automatically retrieve knowledge in knowledge graphs with a high precision. In order to perform reasonable inference on knowledge graphs, we propose a contextual inference pruning algorithm to achieve efficient chain inference. Our algorithm achieves a better inference result with precision and recall of 92% and 96%, respectively, which can avoid most of the meaningless inferences. In addition, we implement two prototypes and provide services, and the results show our approach is practical and effective.", "title": "" }, { "docid": "77a361b4f36289f3e861cb2653b40b83", "text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd introduction to cryptography principles and applications as the choice of reading, you can find here.", "title": "" }, { "docid": "f95ace29fea990f496f011446d4ed88f", "text": "Deep-learning has dramatically changed the world overnight. It greatly boosted the development of visual perception, object detection, and speech recognition, etc. That was attributed to the multiple convolutional processing layers for abstraction of learning representations from massive data. The advantages of deep convolutional structures in data processing motivated the applications of artificial intelligence methods in robotic problems, especially perception and control system, the two typical and challenging problems in robotics. This paper presents a survey of the deep-learning research landscape in mobile robotics. We start with introducing the definition and development of deep-learning in related fields, especially the essential distinctions between image processing and robotic tasks. We described and discussed several typical applications and related works in this domain, followed by the benefits from deeplearning, and related existing frameworks. Besides, operation in the complex dynamic environment is regarded as a critical bottleneck for mobile robots, such as that for autonomous driving. We thus further emphasize the recent achievement on how deeplearning contributes to navigation and control systems for mobile robots. At the end, we discuss the open challenges and research frontiers.", "title": "" }, { "docid": "cacef3b17bafadd25cf9a49e826ee066", "text": "Road accidents are frequent and many cause casualties. 
Fast handling can minimize the number of deaths from traffic accidents. In addition to victims of traffic accidents, there are also patients who need emergency handling of the disease he suffered. One of the first help that can be given to the victim or patient is to use an ambulance equipped with medical personnel and equipment needed. The availability of ambulance and accurate information about victims and road conditions can help the first aid process for victims or patients. Supportive treatment can be done to deal with patients by determining the best route (nearest and fastest) to the nearest hospital. The best route can be known by utilizing the collaboration between the Dijkstra algorithm and the Floyd-warshall algorithm. This application applies Dijkstra's algorithm to determine the fastest travel time to the nearest hospital. The Floyd-warshall algorithm is implemented to determine the closest distance to the hospital. Data on some nearby hospitals will be collected by the system using Dijkstra's algorithm and then the system will calculate the fastest distance based on the last traffic condition using the Floyd-warshall algorithm to determine the best route to the nearest hospital recommended by the system. This application is built with the aim of providing support for the first handling process to the victim or the emergency patient by giving the ambulance calling report and determining the best route to the nearest hospital.", "title": "" }, { "docid": "7fe2479b768ce36f0ff7bd7be65b5dff", "text": "Concolic execution has achieved great success in many binary analysis tasks. However, it is still not a primary option for industrial usage. A well-known reason is that concolic execution cannot scale up to large-size programs. Many research efforts have focused on improving its scalability. Nonetheless, we find that, even when processing small-size programs, concolic execution suffers a great deal from the accuracy and scalability issues. This paper systematically investigates the challenges that can be introduced even by small-size programs, such as symbolic array and symbolic jump. We further verify that the proposed challenges are non-trivial via real-world experiments with three most popular concolic execution tools: BAP, Triton, and Angr. Among a set of 22 logic bombs we designed, Angr can solve only four cases correctly, while BAP and Triton perform much worse. The results imply that current tools are still primitive for practical industrial usage. We summarize the reasons and release the bombs as open source to facilitate further study.", "title": "" } ]
scidocsrr
525bf561c18e12c30e170d2e85d816fd
Comparative Testing of Face Detection Algorithms
[ { "docid": "2fcf4c56da05a86f50b3e5d0c9f33c70", "text": "The localization of human faces in digital images is a fundamental step in the process of face recognition. This paper presents a shape comparison approach to achieve fast, accurate face detection that is robust to changes in illumination and background. The proposed method is edge-based and works on grayscale still images. The Hausdorff distance is used as a similarity measure between a general face model and possible instances of the object within the image. The paper describes an efficient implementation, making this approach suitable for real-time applications. A two-step process that allows both coarse detection and exact localization of faces is presented. Experiments were performed on a large test set base and rated with a new validation measurement. c © In Proc. Third International Conference on Audioand Video-based Biometric Person Authentication, Springer, Lecture Notes in Computer Science, LNCS-2091, pp. 90–95, Halmstad, Sweden, 6–8 June 2001.", "title": "" }, { "docid": "1407b7bd4f597dd64642150629349e5e", "text": "This paper presents a general trainable framework for object detection in static images of cluttered scenes. The detection technique we develop is based on a wavelet representation of an object class derived from a statistical analysis of the class instances. By learning an object class in terms of a subset of an overcomplete dictionary of wavelet basis functions, we derive a compact representation of an object class which is used as an input to a suppori vector machine classifier. This representation overcomes both the problem of in-class variability and provides a low false detection rate in unconstrained environments. We demonstrate the capabilities of the technique i n two domains whose inherent information content differs significantly. The first system is face detection and the second is the domain of people which, in contrast to faces, vary greatly in color, texture, and patterns. Unlike previous approaches, this system learns from examples and does not rely on any a priori (handcrafted) models or motion-based segmentation. The paper also presents a motion-based extension to enhance the performance of the detection algorithm over video sequences. The results presented here suggest that this architecture may well be quite general.", "title": "" } ]
[ { "docid": "0b414748e079542d9dd870e1e8708daa", "text": "Plug-in hybrid electric vehicles (PHEVs) have emerged as an important tool in reducing greenhouse gas emissions, due to their lower dependency on fossil fuel. Since, for cost efficiency, PHEVs have a limited battery capacity, they must be recharged often and especially after trips. Thus, efficient battery charging plays an important role on the success of PHEVs commercial adoption. This paper surveys the state-of-the-art of existing PHEV battery charging schemes. We classify these schemes into four classes, namely, uncontrolled, indirectly controlled, smart, and bidirectional charging, and review various existing techniques within each class. For uncontrolled charging, existing studies focus on evaluating the impact of adding variable charging load on the smart grid. Various indirectly controlled charging schemes have been proposed to control energy prices, in order to indirectly influence the charging operations. Smart charging schemes can directly control a rich set of charging parameters to achieve various performance objectives, such as minimizing power loss, maximizing operator's profit, ensuring fairness, and so on. Finally, bidirectional charging allows a PHEV to discharge energy into smart grid, such that the vehicle can act as a mobile energy source to further stabilize the grid, which is partially supplied by intermittent renewable energy sources. This survey provides a comprehensive one-stop introductory reference to quickly learn about the key features and technical challenges, addressed by existing PHEV battery charging schemes in smart grid.", "title": "" }, { "docid": "90c10466257f8b0c7d3289a319bf0fbe", "text": "This paper describes development of joint materials using only base metals (Cu and Sn) for power semiconductor assembly. The preform sheet of the joint material is made by two kinds of particles such as Cu source and Cu-Sn IMC source. Optimized ratio of Cu source: IMC source provides robust skeleton structure in joint area. The particles' mixture control (Cu density and thickness) affects stress control to eliminate cracks and delamination of the joint area. As evaluation, Thermal Cycling Test (TCT, −40°C∼+200°C, 1,000cycles) of Cu-Cu joint resulted no critical cracks / delamination / voids. We confirmed the material also applicable for attaching SiC die on the DCB substrate on bare Cu heatsink.", "title": "" }, { "docid": "bdb41d1633c603f4b68dfe0191eb822b", "text": "Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts-even concepts about action and perception-are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. 
Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called \"abstract\" concepts that constitute the meanings of grammatical constructions and general inference patterns.", "title": "" }, { "docid": "49942573c60fa910369b81c44447a9b1", "text": "Generic generation and manipulation of text is challenging and has limited success compared to recent deep generative modeling in visual domain. This paper aims at generating plausible text sentences, whose attributes are controlled by learning disentangled latent representations with designated semantics. We propose a new neural generative model which combines variational auto-encoders (VAEs) and holistic attribute discriminators for effective imposition of semantic structures. The model can alternatively be seen as enhancing VAEs with the wake-sleep algorithm for leveraging fake samples as extra training data. With differentiable approximation to discrete text samples, explicit constraints on independent attribute controls, and efficient collaborative learning of generator and discriminators, our model learns interpretable representations from even only word annotations, and produces short sentences with desired attributes of sentiment and tenses. Quantitative experiments using trained classifiers as evaluators validate the accuracy of sentence and attribute generation.", "title": "" }, { "docid": "3f30c821132e07838de325c4f2183f84", "text": "This paper argues for the recognition of important experiential aspects of consumption. Specifically, a general framework is constructed to represent typical consumer behavior variables. Based on this paradigm, the prevailing information processing model is contrasted with an experiential view that focuses on the symbolic, hedonic, and esthetic nature of consumption. This view regards the consumption experience as a phenomenon directed toward the pursuit of fantasies, feelings, and fun.", "title": "" }, { "docid": "8e077186aef0e7a4232eec0d8c73a5a2", "text": "The appetite for up-to-date information about earth’s surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection. 
2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS) Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "465dc98560bd7b0aef6295db19b391f6", "text": "The new era of cognitive computing brings forth the grand challenge of developing systems capable of processing massive amounts of noisy multisensory data. This type of intelligent computing poses a set of constraints, including real-time operation, low-power consumption and scalability, which require a radical departure from conventional system design. Brain-inspired architectures offer tremendous promise in this area. To this end, we developed TrueNorth, a 65 mW real-time neurosynaptic processor that implements a non-von Neumann, low-power, highly-parallel, scalable, and defect-tolerant architecture. With 4096 neurosynaptic cores, the TrueNorth chip contains 1 million digital neurons and 256 million synapses tightly interconnected by an event-driven routing infrastructure. The fully digital 5.4 billion transistor implementation leverages existing CMOS scaling trends, while ensuring one-to-one correspondence between hardware and software. With such aggressive design metrics and the TrueNorth architecture breaking path with prevailing architectures, it is clear that conventional computer-aided design (CAD) tools could not be used for the design. As a result, we developed a novel design methodology that includes mixed asynchronous-synchronous circuits and a complete tool flow for building an event-driven, low-power neurosynaptic chip. The TrueNorth chip is fully configurable in terms of connectivity and neural parameters to allow custom configurations for a wide range of cognitive and sensory perception applications. To reduce the system's communication energy, we have adapted existing application-agnostic very large-scale integration CAD placement tools for mapping logical neural networks to the physical neurosynaptic core locations on the TrueNorth chips. With that, we have successfully demonstrated the use of TrueNorth-based systems in multiple applications, including visual object recognition, with higher performance and orders of magnitude lower power consumption than the same algorithms run on von Neumann architectures. The TrueNorth chip and its tool flow serve as building blocks for future cognitive systems, and give designers an opportunity to develop novel brain-inspired architectures and systems based on the knowledge obtained from this paper.", "title": "" }, { "docid": "82b03b45a093fb6342e92602c437741b", "text": "Human-like path planning is still a challenging task for automated vehicles. Imitation learning can teach these vehicles to learn planning from human demonstration. In this work, we propose to formulate the planning stage as a convolutional neural network (CNN). Thus, we can employ well established CNN techniques to learn planning from imitation. With the proposed method, we train a network for planning in complex traffic situations from both simulated and real world data. The resulting planning network exhibits human-like path generation.", "title": "" }, { "docid": "b255a513fe6140fc9534087563efb36e", "text": "Traditional decision tree classifiers work with data whose values are known and precise. We extend such classifiers to handle data with uncertain information. Value uncertainty arises in many applications during the data collection process. Example sources of uncertainty include measurement/quantization errors, data staleness, and multiple repeated measurements. 
With uncertainty, the value of a data item is often represented not by one single value, but by multiple values forming a probability distribution. Rather than abstracting uncertain data by statistical derivatives (such as mean and median), we discover that the accuracy of a decision tree classifier can be much improved if the \"complete information\" of a data item (taking into account the probability density function (pdf)) is utilized. We extend classical decision tree building algorithms to handle data tuples with uncertain values. Extensive experiments have been conducted which show that the resulting classifiers are more accurate than those using value averages. Since processing pdfs is computationally more costly than processing single values (e.g., averages), decision tree construction on uncertain data is more CPU demanding than that for certain data. To tackle this problem, we propose a series of pruning techniques that can greatly improve construction efficiency.", "title": "" }, { "docid": "e343f97f18c9cd2b52ca8abdf40051df", "text": "Due to the increased demand of animal protein in developing countries, intensive farming is instigated, which results in antibiotic residues in animal-derived products, and eventually, antibiotic resistance. Antibiotic resistance is of great public health concern because the antibiotic-resistant bacteria associated with the animals may be pathogenic to humans, easily transmitted to humans via food chains, and widely disseminated in the environment via animal wastes. These may cause complicated, untreatable, and prolonged infections in humans, leading to higher healthcare cost and sometimes death. In the said countries, antibiotic resistance is so complex and difficult, due to irrational use of antibiotics both in the clinical and agriculture settings, low socioeconomic status, poor sanitation and hygienic status, as well as that zoonotic bacterial pathogens are not regularly cultured, and their resistance to commonly used antibiotics are scarcely investigated (poor surveillance systems). The challenges that follow are of local, national, regional, and international dimensions, as there are no geographic boundaries to impede the spread of antibiotic resistance. In addition, the information assembled in this study through a thorough review of published findings, emphasized the presence of antibiotics in animal-derived products and the phenomenon of multidrug resistance in environmental samples. This therefore calls for strengthening of regulations that direct antibiotic manufacture, distribution, dispensing, and prescription, hence fostering antibiotic stewardship. Joint collaboration across the world with international bodies is needed to assist the developing countries to implement good surveillance of antibiotic use and antibiotic resistance.", "title": "" }, { "docid": "33df4246544a1847b09018cc65ffc995", "text": "In this paper, we propose a method for computing partial functional correspondence between non-rigid shapes. We use perturbation analysis to show how removal of shape parts changes the Laplace-Beltrami eigenfunctions, and exploit it as a prior on the spectral representation of the correspondence. Corresponding parts are optimization variables in our problem and are used to weight the functional correspondence; we are looking for the largest and most regular (in the Mumford-Shah sense) parts that minimize correspondence distortion. 
We show that our approach can cope with very challenging correspondence settings.", "title": "" }, { "docid": "ff9ac94a02a799e63583127ac300b0b4", "text": "Latent variable models have been widely applied for the analysis and visualization of large datasets. In the case of sequential data, closed-form inference is possible when the transition and observation functions are linear. However, approximate inference techniques are usually necessary when dealing with nonlinear dynamics and observation functions. Here, we propose a novel variational inference framework for the explicit modeling of time series, Variational Inference for Nonlinear Dynamics (VIND), that is able to uncover nonlinear observation and transition functions from sequential data. The framework includes a structured approximate posterior, and an algorithm that relies on the fixed-point iteration method to find the best estimate for latent trajectories. We apply the method to several datasets and show that it is able to accurately infer the underlying dynamics of these systems, in some cases substantially outperforming state-of-the-art methods.", "title": "" }, { "docid": "6c5a5bc775316efc278285d96107ddc6", "text": "STUDY DESIGN\nRetrospective study of 55 consecutive patients with spinal metastases secondary to breast cancer who underwent surgery.\n\n\nOBJECTIVE\nTo evaluate the predictive value of the Tokuhashi score for life expectancy in patients with breast cancer with spinal metastases.\n\n\nSUMMARY OF BACKGROUND DATA\nThe score, composed of 6 parameters each rated from 0 to 2, has been proposed by Tokuhashi and colleagues for the prognostic assessment of patients with spinal metastases.\n\n\nMETHODS\nA total of 55 patients surgically treated for vertebral metastases secondary to breast cancer were studied. The score was calculated for each patient and, according to Tokuhashi, the patients were divided into 3 groups with different life expectancy according to their total number of scoring points. In a second step, the grouping for prognosis was modified to get a better correlation of the predicted and definitive survival.\n\n\nRESULTS\nApplying the Tokuhashi score for the estimation of life expectancy of patients with breast cancer with vertebral metastases provided very reliable results. However, the original analysis by Tokuhashi showed a limited correlation between predicted and real survival for each prognostic group. Therefore, our patients were divided into modified prognostic groups regarding their total number of scoring points, leading to a higher significance of the predicted prognosis in each group (P < 0.0001), and a better correlation of the predicted and real survival.\n\n\nCONCLUSION\nThe modified Tokuhashi score assists in decision making based on reliable estimators of life expectancy in patients with spinal metastases secondary to breast cancer.", "title": "" }, { "docid": "80ca7c9b05fa6eefdf8053226f650533", "text": "In this paper we describe LaNCoA, Language Networks Construction and Analysis toolkit implemented in Python. The toolkit provides various procedures for network construction from the text: on the word-level (co-occurrence networks, syntactic networks, shuffled networks), and on the subword-level (syllable networks, grapheme networks). Furthermore, we implement functions for the language networks analysis on the global and local level. 
The toolkit is organized in several modules that enable various aspects of language analysis: analysis of global network measures for different co-occurrence window, comparison of networks based on original and shuffled texts, comparison of networks constructed on different language levels, etc. Text manipulation methods, like corpora cleaning, lemmatization and stopwords removal, are also implemented. For the basic network representation we use available NetworkX functions and methods. However, language network analysis is specific and it requires implementation of additional functions and methods. That was the main motivation for this research.", "title": "" }, { "docid": "b6376259827dfc04f7c7c037631443f3", "text": "In this brief, a low-power flip-flop (FF) design featuring an explicit type pulse-triggered structure and a modified true single phase clock latch based on a signal feed-through scheme is presented. The proposed design successfully solves the long discharging path problem in conventional explicit type pulse-triggered FF (P-FF) designs and achieves better speed and power performance. Based on post-layout simulation results using TSMC CMOS 90-nm technology, the proposed design outperforms the conventional P-FF design data-close-to-output (ep-DCO) by 8.2% in data-to-Q delay. In the mean time, the performance edges on power and power- delay-product metrics are 22.7% and 29.7%, respectively.", "title": "" }, { "docid": "42908bdaa9e72da204630d2ac25ed830", "text": "We propose FINET, a system for detecting the types of named entities in short inputs—such as sentences or tweets—with respect to WordNet’s super fine-grained type system. FINET generates candidate types using a sequence of multiple extractors, ranging from explicitly mentioned types to implicit types, and subsequently selects the most appropriate using ideas from word-sense disambiguation. FINET combats data scarcity and noise from existing systems: It does not rely on supervision in its extractors and generates training data for type selection from WordNet and other resources. FINET supports the most fine-grained type system so far, including types with no annotated training data. Our experiments indicate that FINET outperforms state-of-the-art methods in terms of recall, precision, and granularity of extracted types.", "title": "" }, { "docid": "03b2876a4b62a6e10e8523cccc32452a", "text": "Millions of people regularly report the details of their real-world experiences on social media. This provides an opportunity to observe the outcomes of common and critical situations. Identifying and quantifying these outcomes may provide better decision-support and goal-achievement for individuals, and help policy-makers and scientists better understand important societal phenomena. We address several open questions about using social media data for open-domain outcome identification: Are the words people are more likely to use after some experience relevant to this experience? How well do these words cover the breadth of outcomes likely to occur for an experience? What kinds of outcomes are discovered? 
Studying 3-months of Twitter data capturing people who experienced 39 distinct situations across a variety of domains, we find that these outcomes are generally found to be relevant (55-100% on average) and that causally related concepts are more likely to be discovered than conceptual or semantically related concepts.", "title": "" }, { "docid": "0b4cc0182fba2ca580e44beee5c35f8f", "text": "A good user experience depends on predictable performance within the data-center network.", "title": "" }, { "docid": "16ff5b993508f962550b6de495c9d651", "text": "Finding similar procedures in stripped binaries has various use cases in the domains of cyber security and intellectual property. Previous works have attended this problem and came up with approaches that either trade throughput for accuracy or address a more relaxed problem.\n In this paper, we present a cross-compiler-and-architecture approach for detecting similarity between binary procedures, which achieves both high accuracy and peerless throughput. For this purpose, we employ machine learning alongside similarity by composition: we decompose the code into smaller comparable fragments, transform these fragments to vectors, and build machine learning-based predictors for detecting similarity between vectors that originate from similar procedures.\n We implement our approach in a tool called Zeek and evaluate it by searching similarities in open source projects that we crawl from the world-wide-web. Our results show that we perform 250X faster than state-of-the-art tools without harming accuracy.", "title": "" } ]
scidocsrr
29e4dfe1f2a849a12927791da1ee8090
Unsupervised P2P Rental Recommendations via Integer Programming
[ { "docid": "df0ffd3067abe08a61855f450519086c", "text": "Traditional recommendation algorithms often select products with the highest predicted ratings to recommend. However, earlier research in economics and marketing indicates that a consumer usually makes purchase decision(s) based on the product's marginal net utility (i.e., the marginal utility minus the product price). Utility is defined as the satisfaction or pleasure user u gets when purchasing the corresponding product. A rational consumer chooses the product to purchase in order to maximize the total net utility. In contrast to the predicted rating, the marginal utility of a product depends on the user's purchase history and changes over time. According to the Law of Diminishing Marginal Utility, many products have the decreasing marginal utility with the increase of purchase count, such as cell phones, computers, and so on. Users are not likely to purchase the same or similar product again in a short time if they already purchased it before. On the other hand, some products, such as pet food, baby diapers, would be purchased again and again.\n To better match users' purchase decisions in the real world, this paper explores how to recommend products with the highest marginal net utility in e-commerce sites. Inspired by the Cobb-Douglas utility function in consumer behavior theory, we propose a novel utility-based recommendation framework. The framework can be utilized to revamp a family of existing recommendation algorithms. To demonstrate the idea, we use Singular Value Decomposition (SVD) as an example and revamp it with the framework. We evaluate the proposed algorithm on an e-commerce (shop.com) data set. The new algorithm significantly improves the base algorithm, largely due to its ability to recommend both products that are new to the user and products that the user is likely to re-purchase.", "title": "" }, { "docid": "f0f47ce0fc361740aedf17d6d2061e03", "text": "In supervised learning scenarios, feature selection has be en studied widely in the literature. Selecting features in unsupervis ed learning scenarios is a much harder problem, due to the absence of class la bel that would guide the search for relevant information. And, almos t all of previous unsupervised feature selection methods are “wrapper ” techniques that require a learning algorithm to evaluate the candidate fe ture subsets. In this paper, we propose a “filter” method for feature select ion which is independent of any learning algorithm. Our method can be per formed in either supervised or unsupervised fashion. The proposed me thod is based on the observation that, in many real world classification pr oblems, data from the same class are often close to each other. The importa nce of a feature is evaluated by its power of locality preserving, or , Laplacian Score. We compare our method with data variance (unsupervised) an d Fisher score (supervised) on two data sets. Experimental re sults demonstrate the effectiveness and efficiency of our algorithm.", "title": "" }, { "docid": "c36dac0c410570e84bf8634b32a0cac3", "text": "The design of strategies for branching in Mixed Integer Programming (MIP) is guided by cycles of parameter tuning and offline experimentation on an extremely heterogeneous testbed, using the average performance. Once devised, these strategies (and their parameter settings) are essentially input-agnostic. To address these issues, we propose a machine learning (ML) framework for variable branching in MIP. 
Our method observes the decisions made by Strong Branching (SB), a time-consuming strategy that produces small search trees, collecting features that characterize the candidate branching variables at each node of the tree. Based on the collected data, we learn an easy-to-evaluate surrogate function that mimics the SB strategy, by means of solving a learning-to-rank problem, common in ML. The learned ranking function is then used for branching. The learning is instance-specific, and is performed on-the-fly while executing a branch-and-bound search to solve the instance. Experiments on benchmark instances indicate that our method produces significantly smaller search trees than existing heuristics, and is competitive with a state-of-the-art commercial solver.", "title": "" }, { "docid": "a5a7e3fe9d6eaf8fc25e7fd91b74219e", "text": "We present in this paper a new approach that uses supervised machine learning techniques to improve the performances of optimization algorithms in the context of mixed-integer programming (MIP). We focus on the branch-and-bound (B&B) algorithm, which is the traditional algorithm used to solve MIP problems. In B&B, variable branching is the key component that most conditions the efficiency of the optimization. Good branching strategies exist but are computationally expensive and usually hinder the optimization rather than improving it. Our approach consists in imitating the decisions taken by a supposedly good branching strategy, strong branching in our case, with a fast approximation. To this end, we develop a set of features describing the state of the ongoing optimization and show how supervised machine learning can be used to approximate the desired branching strategy. The approximated function is created by a supervised machine learning algorithm from a set of observed branching decisions taken by the target strategy. The experiments performed on randomly generated and standard benchmark (MIPLIB) problems show promising results.", "title": "" } ]
[ { "docid": "61185af23da5d0138eef58ab62cd0e72", "text": "BACKGROUND\nEarlobe tears and disfigurement often result from prolonged pierced earring use and trauma. They are a common cosmetic complaint for which surgical reconstruction has often been advocated.\n\n\nMATERIALS AND METHODS\nA series of 10 patients with earlobe tears or disfigurement treated using straight-line closure, carbon dioxide (CO2 ) laser ablation, or both are described. A succinct literature review of torn earlobe repair is provided.\n\n\nRESULTS\nSuccessful repair with excellent cosmesis of torn and disfigured earlobes was obtained after straight-line surgical closure, CO2 laser ablation, or both.\n\n\nCONCLUSION\nA minimally invasive earlobe repair technique that involves concomitant surgical closure and CO2 laser skin vaporization produces excellent cosmetic results for torn or disfigured earlobes.", "title": "" }, { "docid": "69102c54448921bfbc63c007cc927b8d", "text": "Multi-goal reinforcement learning (MGRL) addresses tasks where the desired goal state can change for every trial. State-of-the-art algorithms model these problems such that the reward formulation depends on the goals, to associate them with high reward. This dependence introduces additional goal reward resampling steps in algorithms like Hindsight Experience Replay (HER) that reuse trials in which the agent fails to reach the goal by recomputing rewards as if reached states were psuedo-desired goals. We propose a reformulation of goal-conditioned value functions for MGRL that yields a similar algorithm, while removing the dependence of reward functions on the goal. Our formulation thus obviates the requirement of reward-recomputation that is needed by HER and its extensions. We also extend a closely related algorithm, Floyd-Warshall Reinforcement Learning, from tabular domains to deep neural networks for use as a baseline. Our results are competitive with HER while substantially improving sampling efficiency in terms of reward computation.", "title": "" }, { "docid": "3cfc860fde33aa93840358a6764a73a2", "text": "Renal cysts are commonly encountered in clinical practice. Although most cysts found on routine imaging studies are benign, there must be an index of suspicion to exclude a neoplastic process or the presence of a multicystic disorder. This article focuses on the more common adult cystic diseases, including simple and complex renal cysts, autosomal-dominant polycystic kidney disease, and acquired cystic kidney disease.", "title": "" }, { "docid": "a81004b3fc39a66d93811841c6d42ff0", "text": "Failing to properly isolate components in the same address space has resulted in a substantial amount of vulnerabilities. Enforcing the least privilege principle for memory accesses can selectively isolate software components to restrict attack surface and prevent unintended cross-component memory corruption. However, the boundaries and interactions between software components are hard to reason about and existing approaches have failed to stop attackers from exploiting vulnerabilities caused by poor isolation. We present the secure memory views (SMV) model: a practical and efficient model for secure and selective memory isolation in monolithic multithreaded applications. SMV is a third generation privilege separation technique that offers explicit access control of memory and allows concurrent threads within the same process to partially share or fully isolate their memory space in a controlled and parallel manner following application requirements. 
An evaluation of our prototype in the Linux kernel (TCB < 1,800 LOC) shows negligible runtime performance overhead in real-world applications including Cherokee web server (< 0.69%), Apache httpd web server (< 0.93%), and Mozilla Firefox web browser (< 1.89%) with at most 12 LOC changes.", "title": "" }, { "docid": "3bd62709eb49e1513daadec561eb9831", "text": "This paper proposes a current-fed LLC resonant converter that is able to achieve high efficiency over a wide input voltage range. It is derived by integrating a two-phase interleaved boost circuit and a full-bridge LLC circuit together by virtue of sharing the same full-bridge switching unit. Compared with conventional full-bridge LLC converter, the gain characteristic is improved in terms of both gain range and optimal operation area, fixed-frequency pulsewidth-modulated (PWM) control is employed to achieve output voltage regulation, and the input current ripple is minimized as well. The voltage across the turned-off primary-side switch can be always clamped by the bus voltage, reducing the switch voltage stress. Besides, its other distinct features, such as single-stage configuration, and soft switching for all switches also contribute to high power conversion efficiency. The operation principles are presented, and then the main characteristics regarding gain, input current ripple, and zero-voltage switching (ZVS) considering the nonlinear output capacitance of MOSFET are investigated and compared with conventional solutions. Also, the design procedure for some key parameters is presented, and two kinds of interleaved boost integrated resonant converter topologies are generalized. Finally, experimental results of a converter prototype with 120-240 V input and 24 V/25 A output verify all considerations.", "title": "" }, { "docid": "daa7db6183c0ca7b90834dba7467c647", "text": "Accurate prediction of rainfall distribution in landfalling tropical cyclones (LTCs) is very important to disaster prevention but quite challenging to operational forecasters. This chapter will describe the rainfall distribution in LTCs, including both axisymmetric and asymmetric distributions and their major controlling parameters, such as environmental vertical wind shear, TC intensity and motion, and coastline. In addition to the composite results from many LTC cases, several case studies are also given to illustrate the predominant factors that are key to the asymmetric rainfall distribution in LTCs. Future directions in this area and potential ways to improve the operational forecasts of rainfall distribution in LTCs are also discussed briefly.", "title": "" }, { "docid": "213ff71ab1c6ac7915f6fb365100c1f5", "text": "Action anticipation and forecasting in videos do not require a hat-trick, as far as there are signs in the context to foresee how actions are going to be deployed. Capturing these signs is hard because the context includes the past. We propose an end-to-end network for action anticipation and forecasting with memory, to both anticipate the current action and foresee the next one. Experiments on action sequence datasets show excellent results indicating that training on histories with a dynamic memory can significantly improve forecasting performance.", "title": "" }, { "docid": "9489ca5b460842d5a8a65504965f0bd5", "text": "This article, based on a tutorial the author presented at ITC 2008, is an overview and introduction to mixed-signal production test. 
The article focuses on the fundamental techniques and procedures in production test and explores key issues confronting the industry.", "title": "" }, { "docid": "9d3c3a3fa17f47da408be1e24d2121cc", "text": "In this letter, compact substrate integrated waveguide (SIW) power dividers are presented. Both equal and unequal power divisions are considered. A quarter-wavelength long wedge shape SIW structure is used for the power division. Direct coaxial feed is used for the input port and SIW-tomicrostrip transitions are used for the output ports. Four-way equal, unequal and an eight-way equal division power dividers are presented. The four-way and the eight-way power dividers provide -10 dB input matching bandwidth of 39.3% and 13%, respectively, at the design frequency f0 = 2.4 GHz. The main advantage of the power dividers is their compact sizes. Including the microstrip to SIW transitions, size is reduced by at least 46% compared to other reported miniaturized SIW power dividers.", "title": "" }, { "docid": "66f76354b6470a49f18300f67e47abd0", "text": "Technologies in museums often support learning goals, providing information about exhibits. However, museum visitors also desire meaningful experiences and enjoy the social aspects of museum-going, values ignored by most museum technologies. We present ArtLinks, a visualization with three goals: helping visitors make connections to exhibits and other visitors by highlighting those visitors who share their thoughts; encouraging visitors' reflection on the social and liminal aspects of museum-going and their expectations of technology in museums; and doing this with transparency, aligning aesthetically pleasing elements of the design with the goals of connection and reflection. Deploying ArtLinks revealed that people have strong expectations of technology as an information appliance. Despite these expectations, people valued connections to other people, both for their own sake and as a way to support meaningful experience. We also found several of our design choices in the name of transparency led to unforeseen tradeoffs between the social and the liminal.", "title": "" }, { "docid": "d33b2e5883b14ac771cf128d309eddbf", "text": "Automated lip reading is the process of converting movements of the lips, face and tongue to speech in real time with enhanced accuracy. Although performance of lip reading systems is still not remotely similar to audio speech recognition, recent developments in processor technology and the massive explosion and ubiquity of computing devices accompanied with increased research in this field has reduced the ambiguities of the labial language, making it possible for free speech-to-text conversion. This paper surveys the field of lip reading and provides a detailed discussion of the trade-offs between various approaches. It gives a reverse chronological topic wise listing of the developments in lip reading systems in recent years. With advancement in computer vision and pattern recognition tools, the efficacy of real time, effective conversion has increased. The major goal of this paper is to provide a comprehensive reference source for the researchers involved in lip reading, not just for the esoteric academia but all the people interested in this field regardless of particular application areas.", "title": "" }, { "docid": "79574c304675e0ec1a2282027c9fc7c6", "text": "The metaphoric mapping theory suggests that abstract concepts, like time, are represented in terms of concrete dimensions such as space. 
This theory receives support from several lines of research ranging from psychophysics to linguistics and cultural studies; especially strong support comes from recent response time studies. These studies have reported congruency effects between the dimensions of time and space indicating that time evokes spatial representations that may facilitate or impede responses to words with a temporal connotation. The present paper reports the results of three linguistic experiments that examined this congruency effect when participants processed past- and future-related sentences. Response time was shorter when past-related sentences required a left-hand response and future-related sentences a right-hand response than when this mapping of time onto response hand was reversed (Experiment 1). This result suggests that participants can form time-space associations during the processing of sentences and thus this result is consistent with the view that time is mentally represented from left to right. The activation of these time-space associations, however, appears to be non-automatic as shown by the results of Experiments 2 and 3 when participants were asked to perform a non-temporal meaning discrimination task.", "title": "" }, { "docid": "a83bde310a2311fc8e045486a7961657", "text": "Radio frequency identification (RFID) of objects or people has become very popular in many services in industry, distribution logistics, manufacturing companies and goods flow systems. When RFID frequency rises into the microwave region, the tag antenna must be carefully designed to match the free space and to the following ASIC. In this paper, we present a novel folded dipole antenna with a very simple configuration. The required input impedance can be achieved easily by choosing suitable geometry parameters.", "title": "" }, { "docid": "ba2769abc859882f600e64cb14af2ac6", "text": "OBJECTIVE\nThis study measures and compares the outcome of conservative physical therapy with traction, by using magnetic resonance imaging and clinical parameters in patients presenting with low back pain caused by lumbar disc herniation.\n\n\nMETHODS\nA total of 26 patients with LDH (14F, 12M with mean aged 37 +/- 11) were enrolled in this study and 15 sessions (per day on 3 weeks) of physical therapy were applied. That included hot pack, ultrasound, electrotherapy and lumbar traction. Physical examination of the lumbar spine, severity of pain, sleeping order, patient and physician global assessment with visual analogue scale, functional disability by HAQ, Roland Disability Questionnaire, and Modified Oswestry Disability Questionnaire were assessed at baseline and at 4-6 weeks after treatment. Magnetic resonance imaging examinations were carried out before and 4-6 weeks after the treatment\n\n\nRESULTS\nAll patients completed the therapy session. There were significant reductions in pain, sleeping disturbances, patient and physician global assessment and disability scores, and significant increases in lumbar movements between baseline and follow-up periods. There were significant reductions of size of the herniated mass in five patients, and significant increase in 3 patients on magnetic resonance imaging after treatment, but no differences in other patients.\n\n\nCONCLUSIONS\nThis study showed that conventional physical therapies with lumbar traction were effective in the treatment of patient with subacute LDH. These results suggest that clinical improvement is not correlated with the finding of MRI. 
Patients with LDH should be monitored clinically (Fig. 3, Ref. 18).", "title": "" }, { "docid": "7eb150a364984512de830025a6e93e0c", "text": "The mobile ecosystem is characterized by a large and complex network of companies interacting with each other, directly and indirectly, to provide a broad array of mobile products and services to end-customers. With the convergence of enabling technologies, the complexity of the mobile ecosystem is increasing multifold as new actors are emerging, new relations are formed, and the traditional distribution of power is shifted. Drawing on theories of complex systems, interfirm relationships, and the creative art and science of network visualization, this paper identifies key catalysts and develops a method to effectively map the complex structure and dynamics of over 7,000 global companies and 18,000 relationships in the mobile ecosystem. Our visual approach enables decision makers to explore the complexity of interfirm relations in the mobile ecosystem, understand their firmpsilas competitive position in a network context, and identify patterns that may influence their choice of innovation strategy or business models.", "title": "" }, { "docid": "f10e086ca3791ece660ae2f0f4877916", "text": "The routine use of four-chamber screening of the fetal heart was pioneered in the early 1980s and has been shown to detect reliably mainly univentricular hearts in the fetus. Many conotruncal anomalies and ductal-dependent lesions may, however, not be detected with the four-chamber view alone and additional planes are needed. The three-vessel and tracheal (3VT) view is a transverse plane in the upper mediastinum demonstrating simultaneously the course and the connection of both the aortic and ductal arches, their relationship to the trachea and the visualization of the superior vena cava. The purpose of the article is to review the two-dimensional anatomy of this plane and the contribution of colour Doppler and to present a checklist to be achieved on screening ultrasound. Typical suspicions include the detection of abnormal vessel number, abnormal vessel size, abnormal course and alignment and abnormal colour Doppler pattern. Anomalies such as pulmonary and aortic stenosis and atresia, aortic coarctation, interrupted arch, tetralogy of Fallot, common arterial trunk, transposition of the great arteries, right aortic arch, double aortic arch, aberrant right subclavian artery, left superior vena cava are some of the anomalies showing an abnormal 3VT image. Recent studies on the comprehensive evaluation of the 3VT view and adjacent planes have shown the potential of visualizing the thymus and the left brachiocephalic vein during fetal echocardiography and in detecting additional rare conditions. National and international societies are increasingly recommending the use of this plane during routine ultrasound in order to improve prenatal detection rates of critical cardiac defects.", "title": "" }, { "docid": "9fd5e182851ff0be67e8865c336a1f77", "text": "Following the developments of wireless and mobile communication technologies, mobile-commerce (M-commerce) has become more and more popular. However, most of the existing M-commerce protocols do not consider the user anonymity during transactions. This means that it is possible to trace the identity of a payer from a M-commerce transaction. Luo et al. in 2014 proposed an NFC-based anonymous mobile payment protocol. 
It used an NFC-enabled smartphone and combined a built-in secure element (SE) as a trusted execution environment to build an anonymous mobile payment service. But their scheme has several problems and cannot be functional in practice. In this paper, we introduce a new NFC-based anonymous mobile payment protocol. Our scheme has the following features:(1) Anonymity. It prevents the disclosure of user's identity by using virtual identities instead of real identity during the transmission. (2) Efficiency. Confidentiality is achieved by symmetric key cryptography instead of public key cryptography so as to increase the performance. (3) Convenience. The protocol is based on NFC and is EMV compatible. (4) Security. All the transaction is either encrypted or signed by the sender so the confidentiality and authenticity are preserved.", "title": "" }, { "docid": "84f7b499cd608de1ee7443fcd7194f19", "text": "In this paper, we present a new computationally efficient numerical scheme for the minimizing flow approach for optimal mass transport (OMT) with applications to non-rigid 3D image registration. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. Our implementation also employs multigrid, and parallel methodologies on a consumer graphics processing unit (GPU) for fast computation. Although computing the optimal map has been shown to be computationally expensive in the past, we show that our approach is orders of magnitude faster then previous work and is capable of finding transport maps with optimality measures (mean curl) previously unattainable by other works (which directly influences the accuracy of registration). We give results where the algorithm was used to compute non-rigid registrations of 3D synthetic data as well as intra-patient pre-operative and post-operative 3D brain MRI datasets.", "title": "" }, { "docid": "59b10765f9125e9c38858af901a39cc7", "text": "--------__------------------------------------__---------------", "title": "" }, { "docid": "fa7916c0afe0b18956f19b4fc8006971", "text": "INTRODUCTION\nPrevious studies demonstrated that multiple treatments using focused ultrasound can be effective as an non-invasive method for reducing unwanted localized fat deposits. The objective of the study is to investigate the safety and efficacy of this focused ultrasound device in body contouring in Asians.\n\n\nMETHOD\nFifty-three (51 females and 2 males) patients were enrolled into the study. Subjects had up to three treatment sessions with approximately 1-month interval in between treatment. Efficacy was assessed by changes in abdominal circumference, ultrasound fat thickness, and caliper fat thickness. Weight change was monitored to distinguish weight loss induced changes in these measurements. Patient questionnaire was completed after each treatment. The level of pain or discomfort, improvement in body contour and overall satisfaction were graded with a score of 1-5 (1 being the least). Any adverse effects such as erythema, pain during treatment or blistering were recorded.\n\n\nRESULT\nThe overall satisfaction amongst subjects was poor. Objective measurements by ultrasound, abdominal circumference, and caliper did not show significant difference after treatment. 
There is a negative correlation between the abdominal fat thickness and number of shots per treatment session.\n\n\nCONCLUSION\nFocused ultrasound is not effective for non-invasive body contouring among Southern Asians as compared with Caucasian. Such observation is likely due to smaller body figures. Design modifications can overcome this problem and in doing so, improve clinical outcome.", "title": "" } ]
scidocsrr
27868cdcf9701d4e128362e20b2f1dd8
Student Performance Prediction via Online Learning Behavior Analytics
[ { "docid": "d3b6ba3e4b8e80c3c371226d7ae6d610", "text": "Interest in collecting and mining large sets of educational data on student background and performance to conduct research on learning and instruction has developed as an area generally referred to as learning analytics. Higher education leaders are recognizing the value of learning analytics for improving not only learning and teaching but also the entire educational arena. However, theoretical concepts and empirical evidence need to be generated within the fast evolving field of learning analytics. The purpose of the two reported cases studies is to identify alternative approaches to data analysis and to determine the validity and accuracy of a learning analytics framework and its corresponding student and learning profiles. The findings indicate that educational data for learning analytics is context specific and variables carry different meanings and can have different implications across educational institutions and area of studies. Benefits, concerns, and challenges of learning analytics are critically reflected, indicating that learning analytics frameworks need to be sensitive to idiosyncrasies of the educational institution and its stakeholders.", "title": "" } ]
[ { "docid": "6226fddb004d4e8d41b1167f61d3fcd7", "text": "We build a neural conversation system using a deep LST Seq2Seq model with an attention mechanism applied on the decoder. We further improve our system by introducing beam search and re-ranking with a Mutual Information objective function method to search for relevant and coherent responses. We find that both models achieve reasonable results after being trained on a domain-specific dataset and are able to pick up contextual information specific to the dataset. The second model, in particular, has promise with addressing the ”I don’t know” problem and de-prioritizing over-generic responses.", "title": "" }, { "docid": "54537c242bc89fbf15d9191be80c5073", "text": "In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals. We study this problem in a relational setting and make the following contributions. First, we compare two different notions of relational marginals. Second, we show a duality between the resulting relational marginal problems and the maximum likelihood estimation of the parameters of relational models, which generalizes a well-known duality from the propositional setting. Third, by exploiting the relational marginal formulation, we present a statistically sound method to learn the parameters of relational models that will be applied in settings where the number of constants differs between the training and test data. Furthermore, based on a relational generalization of marginal polytopes, we characterize cases where the standard estimators based on feature’s number of true groundings needs to be adjusted and we quantitatively characterize the consequences of these adjustments. Fourth, we prove bounds on expected errors of the estimated parameters, which allows us to lower-bound, among other things, the effective sample size of relational training data.", "title": "" }, { "docid": "088df7d8d71c00f7129d5249844edbc5", "text": "Intense multidisciplinary research has provided detailed knowledge of the molecular pathogenesis of Alzheimer disease (AD). This knowledge has been translated into new therapeutic strategies with putative disease-modifying effects. Several of the most promising approaches, such as amyloid-β immunotherapy and secretase inhibition, are now being tested in clinical trials. Disease-modifying treatments might be at their most effective when initiated very early in the course of AD, before amyloid plaques and neurodegeneration become too widespread. Thus, biomarkers are needed that can detect AD in the predementia phase or, ideally, in presymptomatic individuals. In this Review, we present the rationales behind and the diagnostic performances of the core cerebrospinal fluid (CSF) biomarkers for AD, namely total tau, phosphorylated tau and the 42 amino acid form of amyloid-β. These biomarkers reflect AD pathology, and are candidate markers for predicting future cognitive decline in healthy individuals and the progression to dementia in patients who are cognitively impaired. We also discuss emerging plasma and CSF biomarkers, and explore new proteomics-based strategies for identifying additional CSF markers. 
Furthermore, we outline the roles of CSF biomarkers in drug discovery and clinical trials, and provide perspectives on AD biomarker discovery and the validation of such markers for use in the clinic.", "title": "" }, { "docid": "982af44d0c5fc3d0bddd2804cee77a04", "text": "Coprime array offers a larger array aperture than uniform linear array with the same number of physical sensors, and has a better spatial resolution with increased degrees of freedom. However, when it comes to the problem of adaptive beamforming, the existing adaptive beamforming algorithms designed for the general array cannot take full advantage of coprime feature offered by the coprime array. In this paper, we propose a novel coprime array adaptive beamforming algorithm, where both robustness and efficiency are well balanced. Specifically, we first decompose the coprime array into a pair of sparse uniform linear subarrays and process their received signals separately. According to the property of coprime integers, the direction-of-arrival (DOA) can be uniquely estimated for each source by matching the super-resolution spatial spectra of the pair of sparse uniform linear subarrays. Further, a joint covariance matrix optimization problem is formulated to estimate the power of each source. The estimated DOAs and their corresponding power are utilized to reconstruct the interference-plus-noise covariance matrix and estimate the signal steering vector. Theoretical analyses are presented in terms of robustness and efficiency, and simulation results demonstrate the effectiveness of the proposed coprime array adaptive beamforming algorithm.", "title": "" }, { "docid": "1ba6f0efdac239fa2cb32064bb743d29", "text": "This paper presents a new method for determining efficient spatial distributions of police patrol areas. This method employs a traditional maximal covering formulation and an innovative backup covering formulation to provide alternative optimal solutions to police decision makers, and to address the lack of objective quantitative methods for police area design in the literature or in practice. This research demonstrates that operations research methods can be used in police decision making, presents a new backup coverage model that is appropriate for patrol area design, and encourages the integration of geographic information systems and optimal solution procedures. The models and methods are tested with the police geography of Dallas, TX. The optimal solutions are compared with the existing police geography, showing substantial improvement in number of incidents covered as well as total distance traveled.", "title": "" }, { "docid": "26f957036ead7173f93ec16a57097a50", "text": "The purpose of this paper is to present a direct digital manufacturing (DDM) process that is an order of magnitude faster than other DDM processes currently available. The developed process is based on a mask-image-projection-based Stereolithography process (MIP-SL), during which a Digital Micromirror Device (DMD) controlled projection light cures and cross-links liquid photopolymer resin. In order to achieve high-speed fabrication, we investigated the bottom-up projection system in the MIP-SL process. A set of techniques including film coating and the combination of two-way linear motions have been developed for the quick spreading of liquid resin into uniform thin layers. The process parameters and related settings to achieve the fabrication speed of a few seconds per layer are presented. 
Additionally, the hardware, software, and material setups developed for fabricating given three-dimensional (3D) digital models are presented. Experimental studies using the developed testbed have been performed to verify the effectiveness and efficiency of the presented fast MIP-SL process. The test results illustrate that the newly developed process can build a moderately sized part within minutes instead of hours that are typically required.", "title": "" }, { "docid": "7c525afc11c41e0a8ca6e8c48bdec97c", "text": "AT commands, originally designed in the early 80s for controlling modems, are still in use in most modern smartphones to support telephony functions. The role of AT commands in these devices has vastly expanded through vendor-specific customizations, yet the extent of their functionality is unclear and poorly documented. In this paper, we systematically retrieve and extract 3,500 AT commands from over 2,000 Android smartphone firmware images across 11 vendors. We methodically test our corpus of AT commands against eight Android devices from four different vendors through their USB interface and characterize the powerful functionality exposed, including the ability to rewrite device firmware, bypass Android security mechanisms, exfiltrate sensitive device information, perform screen unlocks, and inject touch events solely through the use of AT commands. We demonstrate that the AT command interface contains an alarming amount of unconstrained functionality and represents a broad attack surface on Android devices.", "title": "" }, { "docid": "ac078f78fcf0f675c21a337f8e3b6f5f", "text": "bstract. Plenoptic cameras, constructed with internal microlens rrays, capture both spatial and angular information, i.e., the full 4-D adiance, of a scene. The design of traditional plenoptic cameras ssumes that each microlens image is completely defocused with espect to the image created by the main camera lens. As a result, nly a single pixel in the final image is rendered from each microlens mage, resulting in disappointingly low resolution. A recently develped alternative approach based on the focused plenoptic camera ses the microlens array as an imaging system focused on the imge plane of the main camera lens. The flexible spatioangular tradeff that becomes available with this design enables rendering of final mages with significantly higher resolution than those from traditional lenoptic cameras. We analyze the focused plenoptic camera in ptical phase space and present basic, blended, and depth-based endering algorithms for producing high-quality, high-resolution imges. We also present our graphics-processing-unit-based impleentations of these algorithms, which are able to render full screen efocused images in real time. © 2010 SPIE and IS&T. DOI: 10.1117/1.3442712", "title": "" }, { "docid": "3a1f8a6934e45b50cbd691b5d28036b1", "text": "Navigating complex routes and finding objects of interest are challenging tasks for the visually impaired. The project NAVIG (Navigation Assisted by artificial VIsion and GNSS) is directed toward increasing personal autonomy via a virtual augmented reality system. The system integrates an adapted geographic information system with different classes of objects useful for improving route selection and guidance. The database also includes models of important geolocated objects that may be detected by real-time embedded vision algorithms. 
Object localization (relative to the user) may serve both global positioning and sensorimotor actions such as heading, grasping, or piloting. The user is guided to his desired destination through spatialized semantic audio rendering, always maintained in the head-centered reference frame. This paper presents the overall project design and architecture of the NAVIG system. In addition, details of a new type of detection and localization device are presented. This approach combines a bio-inspired vision system that can recognize and locate objects very quickly and a 3D sound rendering system that is able to perceptually position a sound at the location of the recognized object. This system was developed in relation to guidance directives developed through participative design with potential users and educators for the visually impaired.", "title": "" }, { "docid": "38c1f6741d99ffc8ab2ab17b5b91e477", "text": "This paper reviews recent advances in radar sensor design for low-power healthcare, indoor real-time positioning and other applications of IoT. Various radar front-end architectures and digital processing methods are proposed to improve the detection performance including detection accuracy, detection range and power consumption. While many of the reported designs were prototypes for concept verification, several integrated radar systems have been demonstrated with reliable measured results with demo systems. A performance comparison of latest radar chip designs has been provided to show their features of different architectures. With great development of IoT, short-range low-power radar sensors for healthcare and indoor positioning applications will attract more and more research interests in the near future.", "title": "" }, { "docid": "88ffb30f1506bedaf7c1a3f43aca439e", "text": "The multiprotein mTORC1 protein kinase complex is the central component of a pathway that promotes growth in response to insulin, energy levels, and amino acids and is deregulated in common cancers. We find that the Rag proteins--a family of four related small guanosine triphosphatases (GTPases)--interact with mTORC1 in an amino acid-sensitive manner and are necessary for the activation of the mTORC1 pathway by amino acids. A Rag mutant that is constitutively bound to guanosine triphosphate interacted strongly with mTORC1, and its expression within cells made the mTORC1 pathway resistant to amino acid deprivation. Conversely, expression of a guanosine diphosphate-bound Rag mutant prevented stimulation of mTORC1 by amino acids. The Rag proteins do not directly stimulate the kinase activity of mTORC1, but, like amino acids, promote the intracellular localization of mTOR to a compartment that also contains its activator Rheb.", "title": "" }, { "docid": "c7631e1df773574e3640062c5fd55a01", "text": "A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party's cloud system causes serious concern over data confidentiality. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system because a few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formulated. 
The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow more flexible adjustment between the number of storage servers and robustness.", "title": "" }, { "docid": "397f6c39825a5d8d256e0cc2fbba5d15", "text": "This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skeletal size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces. We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.", "title": "" }, { "docid": "f291c66ebaa6b24d858103b59de792b7", "text": "In this study, the authors investigated the hypothesis that women's sexual orientation and sexual responses in the laboratory correlate less highly than do men's because women respond primarily to the sexual activities performed by actors, whereas men respond primarily to the gender of the actors. The participants were 20 homosexual women, 27 heterosexual women, 17 homosexual men, and 27 heterosexual men. The videotaped stimuli included men and women engaging in same-sex intercourse, solitary masturbation, or nude exercise (no sexual activity); human male-female copulation; and animal (bonobo chimpanzee or Pan paniscus) copulation. Genital and subjective sexual arousal were continuously recorded. The genital responses of both sexes were weakest to nude exercise and strongest to intercourse. As predicted, however, actor gender was more important for men than for women, and the level of sexual activity was more important for women than for men. Consistent with this result, women responded genitally to bonobo copulation, whereas men did not. An unexpected result was that homosexual women responded more to nude female targets exercising and masturbating than to nude male targets, whereas heterosexual women responded about the same to both sexes at each activity level.", "title": "" }, { "docid": "d04042c81f2c2f7f762025e6b2bd9ab8", "text": "AIMS AND OBJECTIVES\nTo examine the association between trait emotional intelligence and learning strategies and their influence on academic performance among first-year accelerated nursing students.\n\n\nDESIGN\nThe study used a prospective survey design.\n\n\nMETHODS\nA sample size of 81 students (100% response rate) who undertook the accelerated nursing course at a large university in Sydney participated in the study. 
Emotional intelligence was measured using the adapted version of the 144-item Trait Emotional Intelligence Questionnaire. Four subscales of the Motivated Strategies for Learning Questionnaire were used to measure extrinsic goal motivation, peer learning, help seeking and critical thinking among the students. The grade point average score obtained at the end of six months was used to measure academic achievement.\n\n\nRESULTS\nThe results demonstrated a statistically significant correlation between emotional intelligence scores and critical thinking (r = 0.41; p < 0.001), help seeking (r = 0.33; p < 0.003) and peer learning (r = 0.32; p < 0.004) but not with extrinsic goal orientation (r = -0.05; p < 0.677). Emotional intelligence emerged as a significant predictor of academic achievement (β = 0.25; p = 0.023).\n\n\nCONCLUSION\nIn addition to their learning styles, higher levels of awareness and understanding of their own emotions have a positive impact on students' academic achievement. Higher emotional intelligence may lead students to pursue their interests more vigorously and think more expansively about subjects of interest, which could be an explanatory factor for higher academic performance in this group of nursing students.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThe concepts of emotional intelligence are central to clinical practice as nurses need to know how to deal with their own emotions as well as provide emotional support to patients and their families. It is therefore essential that these skills are developed among student nurses to enhance the quality of their clinical practice.", "title": "" }, { "docid": "d15ce9f62f88a07db6fa427fae61f26c", "text": "This paper introduced a detail ElGamal digital signature scheme, and mainly analyzed the existing problems of the ElGamal digital signature scheme. Then improved the scheme according to the existing problems of ElGamal digital signature scheme, and proposed an implicit ElGamal type digital signature scheme with the function of message recovery. As for the problem that message recovery not being allowed by ElGamal signature scheme, this article approached a method to recover message. This method will make ElGamal signature scheme have the function of message recovery. On this basis, against that part of signature was used on most attacks for ElGamal signature scheme, a new implicit signature scheme with the function of message recovery was formed, after having tried to hid part of signature message and refining forthcoming implicit type signature scheme. The safety of the refined scheme was anlyzed, and its results indicated that the new scheme was better than the old one.", "title": "" }, { "docid": "9d2583618e9e00333d044ac53da65ceb", "text": "The phosphor deposits of the β-sialon:Eu2+ mixed with various amounts (0-1 g) of the SnO₂ nanoparticles were fabricated by the electrophoretic deposition (EPD) process. The mixed SnO₂ nanoparticles was observed to cover onto the particle surfaces of the β-sialon:Eu2+ as well as fill in the voids among the phosphor particles. 
The external and internal quantum efficiencies (QEs) of the prepared deposits were found to be dependent on the mixing amount of the SnO₂: by comparing with the deposit without any mixing (48% internal and 38% external QEs), after mixing the SnO₂ nanoparticles, the both QEs were improved to 55% internal and 43% external QEs at small mixing amount (0.05 g); whereas, with increasing the mixing amount to 0.1 and 1 g, they were reduced to 36% and 29% for the 0.1 g addition and 15% and 12% l QEs for the 1 g addition. More interestingly, tunable color appearances of the deposits prepared by the EPD process were achieved, from yellow green to blue, by varying the addition amount of the SnO₂, enabling it as an alternative technique instead of altering the voltage and depositing time for the color appearance controllability.", "title": "" }, { "docid": "de7b16961bb4aa2001a3d0859f68e4c6", "text": "A new practical method is given for the self-calibration of a camera. In this method, at least three images are taken from the same point in space with different orientations of the camera and calibration is computed from an analysis of point matches between the images. The method requires no knowledge of the orientations of the camera. Calibration is based on the image correspondences only. This method differs fundamentally from previous results by Maybank and Faugeras on selfcalibration using the epipolar structure of image pairs. In the method of this paper, there is no epipolar structure since all images are taken from the same point in space. Since the images are all taken from the same point in space, determination of point matches is considerably easier than for images taken with a moving camera, since problems of occlusion or change of aspect or illumination do not occur. The calibration method is evaluated on several sets of synthetic and real image data.", "title": "" }, { "docid": "c70e2174bc25577ccac51912be9d7233", "text": "In this paper, the bridge shape of interior permanent magnet synchronous motor (IPMSM) is designed for integrated starter and generator (ISG) which is applied in hybrid electric vehicle (HEV). Mechanical stress of rotor core which is caused by centrifugal force is the main issue when IPMSM is operated at high speed. The bridge is thin area in rotor core where is mechanically weak point and the shape of bridge significantly affects leakage flux and electromagnetic performance. Therefore, bridge should be designed considering both mechanic and electromagnetic characteristics. In the design process, we firstly find a shape of bridge has low leakage flux and mechanical stress. Next, the calculation of mechanical stress and the electromagnetic characteristics are performed by finite element analysis (FEA). The mechanical stress in rotor core is not maximized in steady high speed but dynamical high momentum. Therefore, transient FEA is necessary to consider the dynamic speed changing in real speed profile for durability experiment. Before the verification test, fatigue characteristic is investigated by using S-N curve of rotor core material. Lastly, the burst test of rotor is performed and the deformation of rotor core is compared between prototype and designed model to verify the design method.", "title": "" }, { "docid": "22c749b089f0bdd1a3296f59fa9cdfc5", "text": "Inspection of printed circuit board (PCB) has been a crucial process in the electronic manufacturing industry to guarantee product quality & reliability, cut manufacturing cost and to increase production. 
The PCB inspection involves detection of defects in the PCB and classification of those defects in order to identify the roots of defects. In this paper, all 14 types of defects are detected and are classified in all possible classes using referential inspection approach. The proposed algorithm is mainly divided into five stages: Image registration, Pre-processing, Image segmentation, Defect detection and Defect classification. The algorithm is able to perform inspection even when captured test image is rotated, scaled and translated with respect to template image which makes the algorithm rotation, scale and translation in-variant. The novelty of the algorithm lies in its robustness to analyze a defect in its different possible appearance and severity. In addition to this, algorithm takes only 2.528 s to inspect a PCB image. The efficacy of the proposed algorithm is verified by conducting experiments on the different PCB images and it shows that the proposed afgorithm is suitable for automatic visual inspection of PCBs.", "title": "" } ]
scidocsrr
71ab0493c8a0dc97c8ae31eac2d7c7f5
High-level synthesis of dynamic data structures: A case study using Vivado HLS
[ { "docid": "ed9e22167d3e9e695f67e208b891b698", "text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.", "title": "" }, { "docid": "cd1cfbdae08907e27a4e1c51e0508839", "text": "High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.", "title": "" } ]
[ { "docid": "8ba192226a3c3a4f52ca36587396e85c", "text": "For many years I have been engaged in psychotherapy with individuals in distress. In recent years I have found myself increasingly concerned with the process of abstracting from that experience the general principles which appear to be involved in it. I have endeavored to discover any orderliness, any unity which seems to inhere in the subtle, complex tissue of interpersonal relationship in which I have so constantly been immersed in therapeutic work. One of the current products of this concern is an attempt to state, in formal terms, a theory of psychotherapy, of personality, and of interpersonal relationships which will encompass and contain the phenomena of my experience. What I wish to do in this paper is to take one very small segment of that theory, spell it out more completely, and explore its meaning and usefulness.", "title": "" }, { "docid": "44934f07118f7ec619c7e165cdf9d797", "text": "The American Heart Association (AHA) has had a longstanding commitment to provide information about the role of nutrition in cardiovascular disease (CVD) risk reduction. Many activities have been and are currently directed toward this objective, including issuing AHA Dietary Guidelines periodically (most recently in 20001) and Science Advisories and Statements on an ongoing basis to review emerging nutrition-related issues. The objective of the AHA Dietary Guidelines is to promote healthful dietary patterns. A consistent focus since the inception of the AHA Dietary Guidelines has been to reduce saturated fat (and trans fat) and cholesterol intake, as well as to increase dietary fiber consumption. Collectively, all the AHA Dietary Guidelines have supported a dietary pattern that promotes the consumption of diets rich in fruits, vegetables, whole grains, low-fat or nonfat dairy products, fish, legumes, poultry, and lean meats. This dietary pattern has a low energy density to promote weight control and a high nutrient density to meet all nutrient needs. As reviewed in the first AHA Science Advisory2 on antioxidant vitamins, epidemiological and population studies reported that some micronutrients may beneficially affect CVD risk (ie, antioxidant vitamins such as vitamin E, vitamin C, and -carotene). Recent epidemiological evidence3 is consistent with the earlier epidemiological and population studies (reviewed in the first Science Advisory).2 These findings have been supported by in vitro studies that have established a role of oxidative processes in the development of the atherosclerotic plaque. Underlying the atherosclerotic process are proatherogenic and prothrombotic oxidative events in the artery wall that may be inhibited by antioxidants. The 1999 AHA Science Advisory2 recommended that the general population consume a balanced diet with emphasis on antioxidant-rich fruits, vegetables, and whole grains, advice that was consistent with the AHA Dietary Guidelines at the time. In the absence of data from randomized, controlled clinical trials, no recommendations were made with regard to the use of antioxidant supplements. In the past 5 years, a number of controlled clinical studies have reported the effects of antioxidant vitamin and mineral supplements on CVD risk (see Tables 1 through 3).4–21 These studies have been the subject of several recent reviews22–26 and formed the database for the present article. 
In general, the studies presented in the tables differ with regard to subject populations studied, type and dose of antioxidant/cocktail administered, length of study, and study end points. Overall, the studies have been conducted on post–myocardial infarction subjects or subjects at high risk for CVD, although some studied healthy subjects. In addition to dosage differences in vitamin E studies, some trials used the synthetic form, whereas others used the natural form of the vitamin. With regard to the other antioxidants, different doses were administered (eg, for -carotene and vitamin C). The antioxidant cocktail formulations used also varied. Moreover, subjects were followed up for at least 1 year and for as long as 12 years. In addition, a meta-analysis of 15 studies (7 studies of vitamin E, 50 to 800 IU; 8 studies of -carotene, 15 to 50 mg) with 1000 or more subjects per trial has been conducted to ascertain the effects of antioxidant vitamins on cardiovascular morbidity and mortality.27 Collectively, for the most part, clinical trials have failed to demonstrate a beneficial effect of antioxidant supplements on CVD morbidity and mortality. With regard to the meta-analysis, the lack of efficacy was demonstrated consistently for different doses of various antioxidants in diverse population groups. Although the preponderance of clinical trial evidence has not shown beneficial effects of antioxidant supplements, evidence from some smaller studies documents a benefit of -tocopherol (Cambridge Heart AntiOxidant Study,13 Secondary Prevention with Antioxidants of Cardiovascular disease in End-stage renal disease study),15 -tocopherol and slow-release vitamin C (Antioxidant Supplementation in Atherosclerosis Prevention study),16 and vitamin C plus vitamin E (Intravascular Ultrasonography Study)17 on cardio-", "title": "" }, { "docid": "54327e52ad52e1b7a6ead7c1afe4a6d5", "text": "Implementation of smart grid provides an opportunity for concurrent implementation of nonintrusive appliance load monitoring (NIALM), which disaggregates the total household electricity data into data on individual appliances. This paper introduces a new disaggregation algorithm for NIALM based on a modified Viterbi algorithm. This modification takes advantage of the sparsity of transitions between appliances' states to decompose the main algorithm, thus making the algorithm complexity linearly proportional to the number of appliances. By consideration of a series of data and integrating a priori information, such as the frequency of use and time on/time off statistics, the algorithm dramatically improves NIALM accuracy as compared to the accuracy of established NIALM algorithms.", "title": "" }, { "docid": "30f48021bca12899d6f2e012e93ba12d", "text": "There are several locomotion mechanisms in Nature. The study of mechanics of any locomotion is very useful for scientists and researchers. Many locomotion principles from Nature have been adapted in robotics. There are several species which are capable of multimode locomotion such as walking and swimming, and flying etc. Frogs are such species, capable of jumping, walking, and swimming. Multimode locomotion is important for robots to work in unknown environment. Frogs are widely known as good multimode locomotors. Webbed feet help them to swim efficiently in water. This paper presents the study of frog's swimming locomotion and adapting the webbed feet for swimming locomotion of the robots. 
A simple mechanical model of robotic leg with webbed foot, which can be used for multi-mode locomotion and robotic frog, is put forward. All the joints of the legs are designed to be driven by tendon-pulley arrangement with the actuators mounted on the body, which allows the legs to be lighter and compact.", "title": "" }, { "docid": "b715631367001fb60b4aca9607257923", "text": "This paper describes a new predictive algorithm that can be used for programming large arrays of analog computational memory elements within 0.2% of accuracy for 3.5 decades of currents. The average number of pulses required are 7-8 (20 mus each). This algorithm uses hot-electron injection for accurate programming and Fowler-Nordheim tunneling for global erase. This algorithm has been tested for programming 1024times16 and 96times16 floating-gate arrays in 0.25 mum and 0.5 mum n-well CMOS processes, respectively", "title": "" }, { "docid": "ebe14e601d0b61f10f6674e2d7108d41", "text": "In this letter, the design procedure and electrical performance of a dual band (2.4/5.8GHz) printed dipole antenna using spiral structure are proposed and investigated. For the first time, a dual band printed dipole antenna with spiral configuration is proposed. In addition, a matching method by adjusting the transmission line width, and a new bandwidth broadening method varying the distance between the top and bottom spirals are reported. The operating frequencies of the proposed antenna are 2.4GHz and 5.8GHz which cover WLAN system. The proposed antenna achieves a good matching using tapered transmission lines for the top and bottom spirals. The desired resonant frequencies are obtained by adjusting the number of turns of the spirals. The bandwidth is optimized by varying the distance between the top and bottom spirals. A relative position of the bottom spiral plays an important role in achieving a bandwidth in terms of 10-dB return loss.", "title": "" }, { "docid": "7957742cd5da5a720446ae9af185df65", "text": "Data Mining ist ein Prozess, bei dem mittels statistischer Verfahren komplexe Muster in meist großen Mengen von Daten gesucht werden. Damit dieser von Organisationen verstärkt zur Entscheidungsunterstützung eingesetzt werden kann, wäre es hilfreich, wenn Domänenexperten durch Self-Service-Anwendungen in die Lage versetzt würden, diese Form der Analysen eigenständig durchzuführen, damit sie nicht mehr auf Datenwissenschaftler und IT-Fachkräfte angewiesen sind. In diesem Artikel soll eine Versuchsreihe vorgestellt werden, die eine Bewertung darüber ermöglicht, wie geeignet etablierte Data-MiningSoftwareplattformen (IBM SPSS Modeler, KNIME, RapidMiner und WEKA) sind, um sie Gelegenheitsanwendern zur Verfügung zu stellen. In den vorgestellten Versuchen sollen Entscheidungsbäume im Fokus stehen, eine besonders einfache Form von Algorithmen, die der Literatur und unserer Erfahrung nach am ehesten für die Nutzung in Self-Service-Data-Mining-Anwendungen geeignet sind. Dabei werden mithilfe eines einheitlichen Datensets auf den verschiedenen Plattformen Entscheidungsbäume für identische Zielvariablen konstruiert. Die Ergebnisse sind im Hinblick auf die Klassifikationsgenauigkeit zwar relativ ähnlich, die Komplexität der Modelle variiert jedoch. 
Aktuelle grafische Benutzeroberflächen lassen sich zwar auch ohne tiefgehende Kompetenzen in den Bereichen Informatik und Statistik bedienen, sie ersetzen aber nicht den Bedarf an datenwissenschaftlichen Kompetenzen, die besonders beim Schritt der Datenvorbereitung zum Einsatz kommen, welcher den größten Teil des Data-Mining-Prozesses ausmacht.", "title": "" }, { "docid": "f782af034ef46a15d89637a43ad2849c", "text": "Introduction: Evidence-based treatment of abdominal hernias involves the use of prosthetic mesh. However, the most commonly used method of treatment of diastasis of the recti involves plication with non-absorbable sutures as part of an abdominoplasty procedure. This case report describes single-port laparoscopic repair of diastasis of recti and umbilical hernia with prosthetic mesh after plication with slowly absorbable sutures combined with abdominoplasty. Technique Description: Our patient is a 36-year-old woman with severe diastasis of the recti, umbilical hernia and an excessive amount of redundant skin after two previous pregnancies and caesarean sections. After raising the upper abdominal flap, a single-port was placed in the left upper quadrant and the ligamentum teres was divided. The diastasis of the recti and umbilical hernia were plicated under direct vision with continuous and interrupted slowly absorbable sutures before an antiadhesive mesh was placed behind the repair with 6 cm overlap, transfixed in 4 quadrants and tacked in place with non-absorbable tacks in a double-crown technique. The left upper quadrant wound was closed with slowly absorbable sutures. The excess skin was removed and fibrin sealant was sprayed in the subcutaneous space to minimize the risk of seroma formation without using drains. Discussion: Combining single-port laparoscopic repair of diastasis of recti and umbilical hernia repair minimizes inadvertent suturing of abdominal contents during plication, the risks of port site hernias associated with conventional multiport repair and permanently reinforced the midline weakness while achieving “scarless” surgery.", "title": "" }, { "docid": "a5cb288b5a2f29c22a9338be416a27f7", "text": "ENCOURAGING CHILDREN'S INTRINSIC MOTIVATION CAN HELP THEM TO ACHIEVE ACADEMIC SUCCESS (ADELMAN, 1978; ADELMAN & TAYLOR, 1986; GOTTFRIED, 1983, 1985). TO HELP STUDENTS WITH AND WITHOUT LEARNING DISABILITIES TO DEVELOP ACADEMIC INTRINSIC MOTIVATION, IT IS IMPORTANT TO DEFINE THE FACTORS THAT AFFECT MOTIVATION (ADELMAN & CHANEY, 1982; ADELMAN & TAYLOR, 1983). THIS ARTICLE OFFERS EDUCATORS AN INSIGHT INTO THE EFFECTS OF DIFFERENT MOTIVATIONAL ORIENTATIONS ON THE SCHOOL LEARNING OF STUDENTS WITH LEARNING DISABILITIES, AS WELL AS INTO THE VARIABLES AFFECTING INTRINSIC AND EXTRINSIC MOTIVATION. ALSO INCLUDED ARE RECOMMENDATIONS, BASED ON EMPIRICAL EVIDENCE, FOR ENHANCING ACADEMIC INTRINSIC MOTIVATION IN LEARNERS OF VARYING ABILITIES AT ALL GRADE LEVELS. Interest in the various aspects of intrinsic and extrinsic motivation has accelerated in recent years. Motivational orientation is considered to be an important factor in determining the academic success of children with and without disabilities (Adelman & Taylor, 1986; Calder & Staw, 1975; Deci, 1975; Deci & Chandler, 1986; Schunk, 1991). Academic intrinsic motivation has been found to be significantly correlated with academic achievement in students with learning disabilities (Gottfried, 1985) and without learning disabilities (Adelman, 1978; Adelman & Taylor, 1983). 
However, children with learning disabilities (LD) are less likely than their nondisabled peers to be intrinsically motivated (Adelman & Chaney, 1982; Adelman & Taylor, 1986; Mastropieri & Scruggs, 1994; Smith, 1994). Students with LD have been found to have more positive attitudes toward school than toward school learning (Wilson & David, 1994). Wilson and David asked 89 students with LD to respond to items on the School Attitude Measures (SAM; Wick, 1990) and on the Children's Academic Intrinsic Motivation Inventory (CAIMI; Gottfried, 1986). The students with LD were found to have a more positive attitude toward the school environment than toward academic tasks. Research has also shown that students with LD may derive their self-perceptions from areas other than school, and do not see themselves as less competent in areas of school learning (Grolnick & Ryan, 1990). Although there is only a limited amount of research available on intrinsic motivation in the population with special needs (Adelman, 1978; Adelman & Taylor, 1986; Grolnick & Ryan, 1990), there is an abundance of research on the general school-age population. This article is an attempt to use existing research to identify variables pertinent to the academic intrinsic motivation of children with learning disabilities. The first part of the article deals with the definitions of intrinsic and extrinsic motivation. The next part identifies some of the factors affecting the motivational orientation and subsequent academic achievement of school-age children. This is followed by empirical evidence of the effects of rewards on intrinsic motivation, and suggestions on enhancing intrinsic motivation in the learner. At the end, several strategies are presented that could be used by the teacher to develop and encourage intrinsic motivation in children with and without LD. DEFINING MOTIVATIONAL ATTRIBUTES Intrinsic Motivation Intrinsic motivation has been defined as (a) participation in an activity purely out of curiosity, that is, from a need to know more about something (Deci, 1975; Gottfried, 1983; Woolfolk, 1990); (b) the desire to engage in an activity purely for the sake of participating in and completing a task (Bates, 1979; Deci, Vallerand, Pelletier, & Ryan, 1991); and (c) the desire to contribute (Mills, 1991). Academic intrinsic motivation has been measured by (a) the ability of the learner to persist with the task assigned (Brophy, 1983; Gottfried, 1983); (b) the amount of time spent by the student on tackling the task (Brophy, 1983; Gottfried, 1983); (c) the innate curiosity to learn (Gottfried, 1983); (d) the feeling of efficacy related to an activity (Gottfried, 1983; Schunk, 1991; Smith, 1994); (e) the desire to select an activity (Brophy, 1983); and (f) a combination of all these variables (Deci, 1975; Deci & Ryan, 1985). A student who is intrinsically motivated will persist with the assigned task, even though it may be difficult (Gottfried, 1983; Schunk, 1990), and will not need any type of reward or incentive to initiate or complete a task (Beck, 1978; Deci, 1975; Woolfolk, 1990). This type of student is more likely to complete the chosen task and be excited by the challenging nature of an activity. The intrinsically motivated student is also more likely to retain the concepts learned and to feel confident about tackling unfamiliar learning situations, like new vocabulary words. 
However, the amount of interest generated by the task also plays a role in the motivational orientation of the learner. An assigned task with zero interest value is less likely to motivate the student than is a task that arouses interest and curiosity. Intrinsic motivation is based in the innate, organismic needs for competence and self-determination (Deci & Ryan, 1985; Woolfolk, 1990), as well as the desire to seek and conquer challenges (Adelman & Taylor, 1990). People are likely to be motivated to complete a task on the basis of their level of interest and the nature of the challenge. Research has suggested that children with higher academic intrinsic motivation function more effectively in school (Adelman & Taylor, 1990; Boggiano & Barrett, 1992; Gottfried, 1990; Soto, 1988). Besides innate factors, there are several other variables that can affect intrinsic motivation. Extrinsic Motivation Adults often give the learner an incentive to participate in or to complete an activity. The incentive might be in the form of a tangible reward, such as money or candy. Or, it might be the likelihood of a reward in the future, such as a good grade. Or, it might be a nontangible reward, for example, verbal praise or a pat on the back. The incentive might also be exemption from a less liked activity or avoidance of punishment. These incentives are extrinsic motivators. A person is said to be extrinsically motivated when she or he undertakes a task purely for the sake of attaining a reward or for avoiding some punishment (Adelman & Taylor, 1990; Ball, 1984; Beck, 1978; Deci, 1975; Wiersma, 1992; Woolfolk, 1990). Extrinsic motivation can, especially in learning and other forms of creative work, interfere with intrinsic motivation (Benninga et al., 1991; Butler, 1989; Deci, 1975; McCullers, Fabes, & Moran, 1987). In such cases, it might be better not to offer rewards for participating in or for completing an activity, be it textbook learning or an organized play activity. Not only teachers but also parents have been found to negatively influence the motivational orientation of the child by providing extrinsic consequences contingent upon their school performance (Gottfried, Fleming, & Gottfried, 1994). The relationship between rewards (and other extrinsic factors) and the intrinsic motivation of the learner is outlined in the following sections. MOTIVATION AND THE LEARNER In a classroom, the student is expected to tackle certain types of tasks, usually with very limited choices. Most of the research done on motivation has been done in settings where the learner had a wide choice of activities, or in a free-play setting. In reality, the student has to complete tasks that are compulsory as well as evaluated (Brophy, 1983). Children are expected to complete a certain number of assignments that meet specified criteria. For example, a child may be asked to complete five multiplication problems and is expected to get correct answers to at least three. Teachers need to consider how instructional practices are designed from the motivational perspective (Schunk, 1990). Development of skills required for academic achievement can be influenced by instructional design. If the design undermines student ability and skill level, it can reduce motivation (Brophy, 1983; Schunk, 1990). This is especially applicable to students with disabilities. 
Students with LD have shown a significant increase in academic learning after engaging in interesting tasks like computer games designed to enhance learning (Adelman, Lauber, Nelson, & Smith, 1989). A common aim of educators is to help all students enhance their learning, regardless of the student's ability level. To achieve this outcome, the teacher has to develop a curriculum geared to the individual needs and ability levels of the students, especially the students with special needs. If the assigned task is within the child's ability level as well as inherently interesting, the child is very likely to be intrinsically motivated to tackle the task. The task should also be challenging enough to stimulate the child's desire to attain mastery. The probability of success or failure is often attributed to factors such as ability, effort, difficulty level of the task, and luck (Schunk, 1990). One or more of these attributes might, in turn, affect the motivational orientation of a student. The student who is sure of some level of success is more likely to be motivated to tackle the task than one who is unsure of the outcome (Adelman & Taylor, 1990). A student who is motivated to learn will find school-related tasks meaningful (Brophy, 1983, 1987). Teachers can help students to maximize their achievement by adjusting the instructional design to their individual characteristics and motivational orientation. The personality traits and motivational tendency of learners with mild handicaps can either help them to compensate for their inadequate learning abilities and enhance performanc", "title": "" }, { "docid": "f683ae3ae16041977f0d6644213de112", "text": "Keywords: Wind turbine; Fault prognosis; Fault detection; Pitch system; ANFIS; Neuro-fuzzy; A-priori knowledge. Abstract: The fast growing wind industry has shown a need for more sophisticated fault prognosis analysis in the critical and high value components of a wind turbine (WT). Current WT studies focus on improving their reliability and reducing the cost of energy, particularly when WTs are operated offshore. WT Supervisory Control and Data Acquisition (SCADA) systems contain alarms and signals that could provide an early indication of component fault and allow the operator to plan system repair prior to complete failure. Several research programmes have been made for that purpose; however, the resulting cost savings are limited because of the data complexity and relatively low number of failures that can be easily detected in early stages. A new fault prognosis procedure is proposed in this paper using a-priori knowledge-based Adaptive Neuro-Fuzzy Inference System (ANFIS). This has the aim to achieve automated detection of significant pitch faults, which are known to be significant failure modes. With the advantage of a-priori knowledge incorporation, the proposed system has improved ability to interpret the previously unseen conditions and thus fault diagnoses are improved. In order to construct the proposed system, the data of the 6 known WT pitch faults were used to train the system with a-priori knowledge incorporated. 
The effectiveness of the approach was demonstrated using three metrics: (1) the trained system was tested in a new wind farm containing 26 WTs to show its prognosis ability; (2) the first test result was compared to a general alarm approach; (3) a Confusion Matrix analysis was made to demonstrate the accuracy of the proposed approach. The result of this research has demonstrated that the proposed a-priori knowledge-based ANFIS (APK-ANFIS) approach has strong potential for WT pitch fault prognosis. Wind is currently the fastest growing renewable energy source for electrical generation around the world. It is expected that a large number of wind turbines (WTs), especially offshore, will be employed in the near future (EWEA, 2011; Krohn, Morthorst, & Awerbuch, 2009). Following a rapid acceleration of wind energy development in the early 21st century, WT manufacturers are beginning to focus on improving their cost of energy. WT operational performance is critical to the cost of energy. This is because Operation and Maintenance (O&M) costs constitute a significant share of the annual cost of a wind …", "title": "" }, { "docid": "4406b7c9d53b895355fa82b11da21293", "text": "In today's scenario, World Wide Web (WWW) is flooded with huge amount of information. Due to growing popularity of the internet, finding the meaningful information among billions of information resources on the WWW is a challenging task. The information retrieval (IR) provides documents to the end users which satisfy their need of information. Search engine is used to extract valuable information from the internet. Web crawler is the principal part of search engine; it is an automatic script or program which can browse the WWW in automatic manner. This process is known as web crawling. In this paper, review on strategies of information retrieval in web crawling has been presented that are classifying into four categories viz: focused, distributed, incremental and hidden web crawlers. Finally, on the basis of user customized parameters the comparative analysis of various IR strategies has been performed.", "title": "" }, { "docid": "55370f9487be43f2fbd320c903005185", "text": "Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size and which are perceptually equivalent to the sample. The two main approaches are statisticsbased methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; then, a random sampling conditioned to this signature produces genuinely different texture images. The second class boils down to a clever “copy-paste” procedure, which stitches together large regions of the sample. Hybrid methods try to combines ideas from both approaches to avoid their hurdles. Current methods, including the recent CNN approaches, are able to produce impressive synthesis on various kinds of textures. Nevertheless, most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures the results of state-of-the-art methods degrade rapidly.", "title": "" }, { "docid": "89703b730ff63548530bdb9e2ce59c6b", "text": "How to develop creative digital products which really meet the prosumer's needs while promoting a positive user experience? That question has guided this work looking for answers through different disciplinary fields. 
Born in 2002 as an Engineering PhD dissertation, since 2003 the method has been improved by teaching it to Communication and Design graduate and undergraduate courses. It also guided some successful interdisciplinary projects. Its main focus is on developing a creative conceptual model that might meet a human need within its context. The resulting method seeks: (1) solutions for the main problems detected in the previous versions; (2) significant ways to represent Design practices; (3) a set of activities that could be developed by people without programming knowledge. The method and its current research state are presented in this work.", "title": "" }, { "docid": "c804aa80440827033fa787723d23c698", "text": "The present paper analyzes the self-generated explanations (from talk-aloud protocols) that “Good” and “Poor” students produce while studying worked-out examples of mechanics problems, and their subsequent reliance on examples during problem solving. We find that “Good” students learn with understanding: They generate many explanations which refine and expand the conditions for the action parts of the example solutions, and relate these actions to principles in the text. These self-explanations are guided by accurate monitoring of their own understanding and misunderstanding. Such learning results in example-independent knowledge and in a better understanding of the principles presented in the text. “Poor” students do not generate sufficient self-explanations, monitor their learning inaccurately, and subsequently rely heavily on examples. We then discuss the role of self-explanations in facilitating problem solving, as well as the adequacy of current AI models of explanation-based learning to account for these psychological findings.", "title": "" }, { "docid": "cf020ec1d5fbaa42d4699b16d27434d0", "text": "Direct methods for restoration of images blurred by motion are analyzed and compared. The term direct means that the considered methods are performed in a one-step fashion without any iterative technique. The blurring point-spread function is assumed to be unknown, and therefore the image restoration process is called blind deconvolution. What is believed to be a new direct method, here called the whitening method, was recently developed. This method and other existing direct methods such as the homomorphic and the cepstral techniques are studied and compared for a variety of motion types. Various criteria such as quality of restoration, sensitivity to noise, and computation requirements are considered. It appears that the recently developed method shows some improvements over other older methods. The research presented here clarifies the differences among the direct methods and offers an experimental basis for choosing which blind deconvolution method to use. In addition, some improvements on the methods are suggested.", "title": "" }, { "docid": "b4714cacd13600659e8a94c2b8271697", "text": "AIM AND OBJECTIVE\nExamine the pharmaceutical qualities of cannabis including a historical overview of cannabis use. Discuss the use of cannabis as a clinical intervention for people experiencing palliative care, including those with life-threatening chronic illness such as multiple sclerosis and motor neurone disease [amyotrophic lateral sclerosis] in the UK.\n\n\nBACKGROUND\nThe non-medicinal use of cannabis has been well documented in the media. There is a growing scientific literature on the benefits of cannabis in symptom management in cancer care. 
Service users, nurses and carers need to be aware of the implications for care and treatment if cannabis is being used medicinally.\n\n\nDESIGN\nA comprehensive literature review.\n\n\nMETHOD\nLiterature searches were made of databases from 1996 using the term cannabis and the combination terms of cannabis and palliative care; symptom management; cancer; oncology; chronic illness; motor neurone disease/amyotrophic lateral sclerosis; and multiple sclerosis. Internet material provided for service users searching for information about the medicinal use of cannabis was also examined.\n\n\nRESULTS\nThe literature on the use of cannabis in health care repeatedly refers to changes for users that may be equated with improvement in quality of life as an outcome of its use. This has led to increased use of cannabis by these service users. However, the cannabis used is usually obtained illegally and can have consequences for those who choose to use it for its therapeutic value and for nurses who are providing care.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nQuestions and dilemmas are raised concerning the role of the nurse when caring and supporting a person making therapeutic use of cannabis.", "title": "" }, { "docid": "0bc7de3f7ac06aa080ec590bdaf4c3b3", "text": "This paper demonstrates that US prestige-press coverage of global warming from 1988 to 2002 has contributed to a significant divergence of popular discourse from scientific discourse. This failed discursive translation results from an accumulation of tactical media responses and practices guided by widely accepted journalistic norms. Through content analysis of US prestige press— meaning the New York Times, the Washington Post, the Los Angeles Times, and the Wall Street Journal—this paper focuses on the norm of balanced reporting, and shows that the prestige press’s adherence to balance actually leads to biased coverage of both anthropogenic contributions to global warming and resultant action. r 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "03277ef81159827a097c73cd24f8b5c0", "text": "It is generally accepted that there is something special about reasoning by using mental images. The question of how it is special, however, has never been satisfactorily spelled out, despite more than thirty years of research in the post-behaviorist tradition. This article considers some of the general motivation for the assumption that entertaining mental images involves inspecting a picture-like object. It sets out a distinction between phenomena attributable to the nature of mind to what is called the cognitive architecture, and ones that are attributable to tacit knowledge used to simulate what would happen in a visual situation. With this distinction in mind, the paper then considers in detail the widely held assumption that in some important sense images are spatially displayed or are depictive, and that examining images uses the same mechanisms that are deployed in visual perception. I argue that the assumption of the spatial or depictive nature of images is only explanatory if taken literally, as a claim about how images are physically instantiated in the brain, and that the literal view fails for a number of empirical reasons--for example, because of the cognitive penetrability of the phenomena cited in its favor. 
Similarly, while it is arguably the case that imagery and vision involve some of the same mechanisms, this tells us very little about the nature of mental imagery and does not support claims about the pictorial nature of mental images. Finally, I consider whether recent neuroscience evidence clarifies the debate over the nature of mental images. I claim that when such questions as whether images are depictive or spatial are formulated more clearly, the evidence does not provide support for the picture-theory over a symbol-structure theory of mental imagery. Even if all the empirical claims were true, they do not warrant the conclusion that many people have drawn from them: that mental images are depictive or are displayed in some (possibly cortical) space. Such a conclusion is incompatible with what is known about how images function in thought. We are then left with the provisional counterintuitive conclusion that the available evidence does not support rejection of what I call the \"null hypothesis\"; namely, that reasoning with mental images involves the same form of representation and the same processes as that of reasoning in general, except that the content or subject matter of thoughts experienced as images includes information about how things would look.", "title": "" }, { "docid": "7d301fc945abe95cef82cb56e98e6cfe", "text": "Many modern applications are a mixture of streaming, transactional and analytical workloads. However, traditional data platforms are each designed for supporting a specific type of workload. The lack of a single platform to support all these workloads has forced users to combine disparate products in custom ways. The common practice of stitching heterogeneous environments has caused enormous production woes by increasing complexity and the total cost of ownership. To support this class of applications, we present SnappyData as the first unified engine capable of delivering analytics, transactions, and stream processing in a single integrated cluster. We build this hybrid engine by carefully marrying a big data computational engine (Apache Spark) with a scale-out transactional store (Apache GemFire). We study and address the challenges involved in building such a hybrid distributed system with two conflicting components designed on drastically different philosophies: one being a lineage-based computational model designed for high-throughput analytics, the other a consensusand replication-based model designed for low-latency operations.", "title": "" } ]
scidocsrr
cd4fef6db7a2a054c813b3bf27d67f64
Scalable high-performance architecture for convolutional ternary neural networks on FPGA
[ { "docid": "b7d13c090e6d61272f45b1e3090f0341", "text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.", "title": "" }, { "docid": "87daab52e390eaeff7da0ad7dafe728a", "text": "The computation and storage requirements for Deep Neural Networks (DNNs) are usually high. This issue limits their deployability on ubiquitous computing devices such as smart phones, wearables and autonomous drones. In this paper, we propose ternary neural networks (TNNs) in order to make deep learning more resource-efficient. We train these TNNs using a teacher-student approach based on a novel, layer-wise greedy methodology. Thanks to our two-stage training procedure, the teacher network is still able to use state-of-the-art methods such as dropout and batch normalization to increase accuracy and reduce training time. Using only ternary weights and activations, the student ternary network learns to mimic the behavior of its teacher network without using any multiplication. Unlike its {-1,1} binary counterparts, a ternary neural network inherently prunes the smaller weights by setting them to zero during training. This makes them sparser and thus more energy-efficient. We design a purpose-built hardware architecture for TNNs and implement it on FPGA and ASIC. We evaluate TNNs on several benchmark datasets and demonstrate up to 3.1 χ better energy efficiency with respect to the state of the art while also improving accuracy.", "title": "" } ]
[ { "docid": "b280d6115add9407a08de94d34fe47d2", "text": "Terabytes of data are generated day-to-day from modern information systems, cloud computing and digital technologies, as the increasing number of Internet connected devices grows. However, the analysis of these massive data requires many efforts at multiple levels for knowledge extraction and decision making. Therefore, Big Data Analytics is a current area of research and development that has become increasingly important. This article investigates cutting-edge research efforts aimed at analyzing Internet of Things (IoT) data. The basic objective of this article is to explore the potential impact of large data challenges, research efforts directed towards the analysis of IoT data and various tools associated with its analysis. As a result, this article suggests the use of platforms to explore big data in numerous stages and better understand the knowledge we can draw from the data, which opens a new horizon for researchers to develop solutions based on open research challenges and topics.", "title": "" }, { "docid": "eb31a7242c682b3683ce9659ce32b7c9", "text": "Code smells are symptoms of poor design and implementation choices that may hinder code comprehension, and possibly increase changeand fault-proneness. While most of the detection techniques just rely on structural information, many code smells are intrinsically characterized by how code elements change overtime. In this paper, we propose Historical Information for Smell deTection (HIST), an approach exploiting change history information to detect instances of five different code smells, namely Divergent Change, Shotgun Surgery, Parallel Inheritance, Blob, and Feature Envy. We evaluate HIST in two empirical studies. The first, conducted on 20 open source projects, aimed at assessing the accuracy of HIST in detecting instances of the code smells mentioned above. The results indicate that the precision of HIST ranges between 72 and 86 percent, and its recall ranges between 58 and 100 percent. Also, results of the first study indicate that HIST is able to identify code smells that cannot be identified by competitive approaches solely based on code analysis of a single system's snapshot. Then, we conducted a second study aimed at investigating to what extent the code smells detected by HIST (and by competitive code analysis techniques) reflect developers' perception of poor design and implementation choices. We involved 12 developers of four open source projects that recognized more than 75 percent of the code smell instances identified by HIST as actual design/implementation problems.", "title": "" }, { "docid": "c42edb326ec95c257b821cc617e174e6", "text": "recommendation systems support users and developers of various computer and software systems to overcome information overload, perform information discovery tasks and approximate computation, among others. They have recently become popular and have attracted a wide variety of application scenarios from business process modelling to source code manipulation. Due to this wide variety of application domains, different approaches and metrics have been adopted for their evaluation. In this chapter, we review a range of evaluation metrics and measures as well as some approaches used for evaluating recommendation systems. The metrics presented in this chapter are grouped under sixteen different dimensions, e.g., correctness, novelty, coverage. We review these metrics according to the dimensions to which they correspond. 
A brief overview of approaches to comprehensive evaluation using collections of recommendation system dimensions and associated metrics is presented. We also provide suggestions for key future research and practice directions.", "title": "" }, { "docid": "31328c32656d25d00d45a714df0f6d94", "text": "In a heterogeneous cellular network (HetNet) consisting of $M$ tiers of densely-deployed base stations (BSs), consider that each of the BSs in the HetNet that are associated with multiple users is able to simultaneously schedule and serve two users in a downlink time slot by performing the (power-domain) non-orthogonal multiple access (NOMA) scheme. This paper aims at the preliminary study on the downlink coverage performance of the HetNet with the non-cooperative and the proposed cooperative NOMA schemes. First, we study the coverage probability of the NOMA users for the non-cooperative NOMA scheme in which no BSs are coordinated to jointly transmit the NOMA signals for a particular cell and the coverage probabilities of the two NOMA users of the BSs in each tier are derived. We show that the coverage probabilities can be largely reduced if allocated transmit powers for the NOMA users are not satisfied with some constraints. Next, we study and derive the coverage probabilities for the proposed cooperative NOMA scheme in which the void BSs that are not tagged by any users are coordinated to enhance the far NOMA user in a particular cell. Our analyses show that cooperative NOMA can significantly improve the coverage of all NOMA users as long as the transmit powers for the NOMA users are properly allocated.", "title": "" }, { "docid": "255a707951238ace366ef1ea0df833fc", "text": "During the last decade, researchers have verified that clothing can provide information for gender recognition. However, before extracting features, it is necessary to segment the clothing region. We introduce a new clothes segmentation method based on the application of the GrabCut technique over a trixel mesh, obtaining very promising results for a close to real time system. Finally, the clothing features are combined with facial and head context information to outperform previous results in gender recognition with a public database.", "title": "" }, { "docid": "add30dc8d14a26eba48dbe5baaaf4169", "text": "The authors investigated whether intensive musical experience leads to enhancements in executive processing, as has been shown for bilingualism. Young adults who were bilinguals, musical performers (instrumentalists or vocalists), or neither completed 3 cognitive measures and 2 executive function tasks based on conflict. Both executive function tasks included control conditions that assessed performance in the absence of conflict. All participants performed equivalently for the cognitive measures and the control conditions of the executive function tasks, but performance diverged in the conflict conditions. In a version of the Simon task involving spatial conflict between a target cue and its position, bilinguals and musicians outperformed monolinguals, replicating earlier research with bilinguals. In a version of the Stroop task involving auditory and linguistic conflict between a word and its pitch, the musicians performed better than the other participants. Instrumentalists and vocalists did not differ on any measure. 
Results demonstrate that extended musical experience enhances executive control on a nonverbal spatial task, as previously shown for bilingualism, but also enhances control in a more specialized auditory task, although the effect of bilingualism did not extend to that domain.", "title": "" }, { "docid": "8b675cc47b825268837a7a2b5a298dc9", "text": "Artificial Intelligence chatbot is a technology that makes interaction between man and machine possible by using natural language. In this paper, we proposed an architectural design of a chatbot that will function as virtual diabetes physician/doctor. This chatbot will allow diabetic patients to have a diabetes control/management advice without the need to go to the hospital. A general history of a chatbot, a brief description of each chatbots is discussed. We proposed the design of a new technique that will be implemented in this chatbot as the key component to function as diabetes physician. Using this design, chatbot will remember the conversation path through parameter called Vpath. Vpath will allow chatbot to gives a response that is mostly suitable for the whole conversation as it specifically designed to be a virtual diabetes physician.", "title": "" }, { "docid": "9958d07645e35ec725dbcf4e11ffc0b1", "text": "A bed exiting monitoring system with fall detection function for the elderly living alone is proposed in this paper. By separating the process of exiting or getting on the bed into several significant movements, the sensor system composed of infrared and pressure sensors attached to the bed will correspondingly respond to these movements. Using the finite state machine (FSM) method, the bed exiting state and fall events can be detected according to specific transitions recognized by the sensor system. Experiments with plausible assessment are conducted to find the optimal sensor combination solution and to verify the FSM algorithm, which is demonstrated feasible and effective in practical use.", "title": "" }, { "docid": "7793601ae7788b8a7b1082f3757cf1ab", "text": "In this paper we present a reference data set that we are making publicly available to the indoor navigation community [8]. This reference data is intended for the analysis and verification of algorithms based on foot mounted inertial sensors. Furthermore, we describe our data collection methodology that is applicable to the analysis of a broad range of indoor navigation approaches. We employ a high precision optical reference system that is traditionally being used in the film industry for human motion capturing and in applications such as analysis of human motion in sports and medical rehabilitation. The data set provides measurements from a six degrees of freedom foot mounted inertial MEMS sensor array, as well as synchronous high resolution data from the optical tracking system providing ground truth for location and orientation. We show the use of this reference data set by comparing the performance of algorithms for an essential part of pedestrian dead reckoning systems for positioning, namely identification of the rest phase during the human gait cycle.", "title": "" }, { "docid": "9824b33621ad02c901a9e16895d2b1a6", "text": "Objective This systematic review aims to summarize current evidence on which naturally present cannabinoids contribute to cannabis psychoactivity, considering their reported concentrations and pharmacodynamics in humans. 
Design Following PRISMA guidelines, papers published before March 2016 in Medline, Scopus-Elsevier, Scopus, ISI-Web of Knowledge and COCHRANE, and fulfilling established a-priori selection criteria have been included. Results In 40 original papers, three naturally present cannabinoids (∆-9-Tetrahydrocannabinol, ∆-8-Tetrahydrocannabinol and Cannabinol) and one human metabolite (11-OH-THC) had clinical relevance. Of these, the metabolite produces the greatest psychoactive effects. Cannabidiol (CBD) is not psychoactive but plays a modulating role on cannabis psychoactive effects. The proportion of 9-THC in plant material is higher (up to 40%) than in other cannabinoids (up to 9%). Pharmacodynamic reports vary due to differences in methodological aspects (doses, administration route and volunteers' previous experience with cannabis). Conclusions Findings reveal that 9-THC contributes the most to cannabis psychoactivity. Due to lower psychoactive potency and smaller proportions in plant material, other psychoactive cannabinoids have a weak influence on cannabis final effects. Current lack of standard methodology hinders homogenized research on cannabis health effects. Working on a standard cannabis unit considering 9-THC is recommended.", "title": "" }, { "docid": "aa8ae1fc471c46b5803bfa1303cb7001", "text": "It is widely recognized that steganography with sideinformation in the form of a precover at the sender enjoys significantly higher empirical security than other embedding schemes. Despite the success of side-informed steganography, current designs are purely heuristic and little has been done to develop the embedding rule from first principles. Building upon the recently proposed MiPOD steganography, in this paper we impose multivariate Gaussian model on acquisition noise and estimate its parameters from the available precover. The embedding is then designed to minimize the KL divergence between cover and stego distributions. In contrast to existing heuristic algorithms that modulate the embedding costs by 1–2|e|, where e is the rounding error, in our model-based approach the sender should modulate the steganographic Fisher information, which is a loose equivalent of embedding costs, by (1–2|e|)^2. Experiments with uncompressed and JPEG images show promise of this theoretically well-founded approach. Introduction Steganography is a privacy tool in which messages are embedded in inconspicuous cover objects to hide the very presence of the communicated secret. Digital media, such as images, video, and audio are particularly suitable cover sources because of their ubiquity and the fact that they contain random components, the acquisition noise. On the other hand, digital media files are extremely complex objects that are notoriously hard to describe with sufficiently accurate and estimable statistical models. This is the main reason for why current steganography in such empirical sources [3] lacks perfect security and heavily relies on heuristics, such as embedding “costs” and intuitive modulation factors. Similarly, practical steganalysis resorts to increasingly more complex high-dimensional descriptors (rich models) and advanced machine learning paradigms, including ensemble classifiers and deep learning. Often, a digital media object is subjected to processing and/or format conversion prior to embedding the secret. The last step in the processing pipeline is typically quantization. 
In side-informed steganography with precover [21], the sender makes use of the unquantized cover values during embedding to hide data in a more secure manner. The first embedding scheme of this type described in the literature is the embedding-while-dithering [14] in which the secret message was embedded by perturbing the process of color quantization and dithering when converting a true-color image to a palette format. Perturbed quantization [15] started another direction in which rounding errors of DCT coefficients during JPEG compression were used to modify the embedding algorithm. This method has been advanced through a series of papers [23, 24, 29, 20], culminating with approaches based on advanced coding techniques with a high level of empirical security [19, 18, 6]. Side-information can have many other forms. Instead of one precover, the sender may have access to the acquisition oracle (a camera) and take multiple images of the same scene. These multiple exposures can be used to estimate the acquisition noise and also incorporated during embedding. This research direction has been developed to a lesser degree compared to steganography with precover most likely due to the difficulty of acquiring the required imagery and modeling the differences between acquisitions. In a series of papers [10, 12, 11], Franz et al. proposed a method in which multiple scans of the same printed image on a flat-bed scanner were used to estimate the model of the acquisition noise at every pixel. This requires acquiring a potentially large number of scans, which makes this approach rather labor intensive. Moreover, differences in the movement of the scanner head between individual scans lead to slight spatial misalignment that complicates using this type of side-information properly. Recently, the authors of [7] showed how multiple JPEG images of the same scene can be used to infer the preferred direction of embedding changes. By working with quantized DCT coefficients instead of pixels, the embedding is less sensitive to small differences between multiple acquisitions. Despite the success of side-informed schemes, there appears to be an alarming lack of theoretical analysis that would either justify the heuristics or suggest a well-founded (and hopefully more powerful) approach. In [13], the author has shown that the precover compensates for the lack of the cover model. In particular, for a Gaussian model of acquisition noise, precover-informed rounding is more secure than embedding designed to preserve the cover model estimated from the precover image assuming the cover is “sufficiently non-stationary.” Another direction worth mentioning in this context is the bottom-up model-based approach recently proposed by Bas [2]. The author showed that a high-capacity steganographic scheme with a rather low empirical detectability can be constructed when the process of digitally developing a RAW sensor capture is sufficiently simplified. The impact of embedding is masked as an increased level of photonic noise, e.g., due to a higher ISO setting. It will likely be rather difficult, however, to extend this approach to realistic processing pipelines. Inspired by the success of the multivariate Gaussian model in steganography for digital images [25, 17, 26], in this paper we adopt the same model for the precover and then derive the embedding rule to minimize the KL divergence between cover and stego distributions. The sideinformation is used to estimate the parameters of the acquisition noise and the noise-free scene. 
In the next section, we review current state of the art in heuristic side-informed steganography with precover. In the following section, we introduce a formal model of image acquisition. In Section “Side-informed steganography with MVG acquisition noise”, we describe the proposed model-based embedding method, which is related to heuristic approaches in Section “Connection to heuristic schemes.” The main bulk of results from experiments on images represented in the spatial and JPEG domain appear in Section “Experiments.” In the subsequent section, we investigate whether the public part of the selection channel, the content adaptivity, can be incorporated in selection-channel-aware variants of steganalysis features to improve detection of side-informed schemes. The paper is then closed with Conclusions. The following notation is adopted for technical arguments. Matrices and vectors will be typeset in boldface, while capital letters are reserved for random variables with the corresponding lower case symbols used for their realizations. In this paper, we only work with grayscale cover images. Precover values will be denoted with xij ∈ R, while cover and stego values will be integer arrays cij and sij, 1 ≤ i ≤ n1, 1 ≤ j ≤ n2, respectively. The symbols [x], ⌈x⌉, and ⌊x⌋ are used for rounding and rounding up and down the value of x. By N(μ, σ²), we understand Gaussian distribution with mean μ and variance σ². The complementary cumulative distribution function of a standard normal variable (the tail probability) will be denoted Q(x) = ∫_x^∞ (2π)^(−1/2) exp(−z²/2) dz. Finally, we say that f(x) ≈ g(x) when lim_{x→∞} f(x)/g(x) = 1. Prior art in side-informed steganography with precover All modern steganographic schemes, including those that use side-information, are implemented within the paradigm of distortion minimization. First, each cover element cij is assigned a “cost” ρij that measures the impact on detectability should that element be modified during embedding. The payload is then embedded while minimizing the sum of costs of all changed cover elements, ∑_{cij ≠ sij} ρij. A steganographic scheme that embeds with the minimal expected cost changes each cover element with probability βij = exp(−λρij) / (1 + exp(−λρij)), (1) if the embedding operation is constrained to be binary, and βij = exp(−λρij) / (1 + 2 exp(−λρij)), (2) for a ternary scheme with equal costs of changing cij to cij ± 1. Syndrome-trellis codes [8] can be used to build practical embedding schemes that operate near the rate–distortion bound. For steganography designed to minimize costs (embedding distortion), a popular heuristic to incorporate a precover value xij during embedding is to modulate the costs based on the rounding error eij = cij − xij, −1/2 ≤ eij ≤ 1/2 [23, 29, 20, 18, 19, 6, 24]. A binary embedding scheme modulates the cost of changing cij = [xij] to [xij] + sign(eij) by 1 − 2|eij|, while prohibiting the change to [xij] − sign(eij): ρij(sign(eij)) = (1 − 2|eij|)ρij (3) and ρij(−sign(eij)) = Ω, (4) where ρij(u) is the cost of modifying the cover value by u ∈ {−1, 1}, ρij are costs of some additive embedding scheme, and Ω is a large constant. This modulation can be justified heuristically because when |eij| ≈ 1/2, a small perturbation of xij could cause cij to be rounded to the other side. Such coefficients are thus assigned a proportionally smaller cost because 1 − 2|eij| ≈ 0. On the other hand, the costs are unchanged when eij ≈ 0, as it takes a larger perturbation of the precover to change the rounded value. 
A ternary version of this embedding strategy [6] allows modifications both ways with costs: ρij(sign(eij)) = (1−2|eij |)ρij (5) ρij(−sign(eij)) = ρij . (6) Some embedding schemes do not use costs and, instead, minimize statistical detectability. In MiPOD [25], the embedding probabilities βij are derived from their impact on the cover multivariate Gaussian model by solving the following equation for each pixel ij: βijIij = λ ln 1−2βij βij , (7) where Iij = 2/σ̂4 ij is the Fisher information with σ̂ 2 ij an estimated variance of the acquisition noise at pixel ij, and λ is a Lagrange multiplier determined by the payload size. To incorporate the side-information, the sender first converts the embedding probabilities into costs and then modulates them as in (3) or (5). This can be done b", "title": "" }, { "docid": "bae2f948eca1dc88cbcd5cb2e6165d3b", "text": "Important attributes of 3D brain cortex segmentation algorithms include robustness, accuracy, computational efficiency, and facilitation of user interaction, yet few algorithms incorporate all of these traits. Manual segmentation is highly accurate but tedious and laborious. Most automatic techniques, while less demanding on the user, are much less accurate. It would be useful to employ a fast automatic segmentation procedure to do most of the work but still allow an expert user to interactively guide the segmentation to ensure an accurate final result. We propose a novel 3D brain cortex segmentation procedure utilizing dual-front active contours which minimize image-based energies in a manner that yields flexibly global minimizers based on active regions. Region-based information and boundary-based information may be combined flexibly in the evolution potentials for accurate segmentation results. The resulting scheme is not only more robust but much faster and allows the user to guide the final segmentation through simple mouse clicks which add extra seed points. Due to the flexibly global nature of the dual-front evolution model, single mouse clicks yield corrections to the segmentation that extend far beyond their initial locations, thus minimizing the user effort. Results on 15 simulated and 20 real 3D brain images demonstrate the robustness, accuracy, and speed of our scheme compared with other methods.", "title": "" }, { "docid": "8010361144a7bd9fc336aba88f6e8683", "text": "Moving garments and other cloth objects exhibit dynamic, complex wrinkles. Generating such wrinkles in a virtual environment currently requires either a time-consuming manual design process, or a computationally expensive simulation, often combined with accurate parameter-tuning requiring specialized animator skills. Our work presents an alternative approach for wrinkle generation which combines coarse cloth animation with a post-processing step for efficient generation of realistic-looking fine dynamic wrinkles. Our method uses the stretch tensor of the coarse animation output as a guide for wrinkle placement. To ensure temporal coherence, the placement mechanism uses a space-time approach allowing not only for smooth wrinkle appearance and disappearance, but also for wrinkle motion, splitting, and merging over time. Our method generates believable wrinkle geometry using specialized curve-based implicit deformers. 
The method is fully automatic and has a single user control parameter that enables the user to mimic different fabrics.", "title": "" }, { "docid": "f05f4c731c6ae024026dbde007bf5b38", "text": "While the first two functions are essential to switching power supplies, the latter has universal applications. Mixed-signal circuits, for instance, typically incur clock-synchronized load-current events that are faster than any active power supply circuit can supply, and do so while only surviving small variations in voltage. The result of these transient current excursions is noisy voltages, be they supply lines or data links. Capacitors are used to mitigate these effects, to supply and/or shunt the transient currents the power supply circuit is not quick enough to deliver, which is why a typical high performance system is sprinkled with many nanoand micro-Farad capacitors.", "title": "" }, { "docid": "1a6ec9678c5ee8aa0861e6c606c22330", "text": "Today millions of web-users express their opinions about many topics through blogs, wikis, fora, chats and social networks. For sectors such as e-commerce and e-tourism, it is very useful to automatically analyze the huge amount of social information available on the Web, but the extremely unstructured nature of these contents makes it a difficult task. SenticNet is a publicly available resource for opinion mining built exploiting AI and Semantic Web techniques. It uses dimensionality reduction to infer the polarity of common sense concepts and hence provide a public resource for mining opinions from natural language text at a semantic, rather than just syntactic, level.", "title": "" }, { "docid": "0da299fb53db5980a10e0ae8699d2209", "text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, are empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecast the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.", "title": "" }, { "docid": "9c8583dd46ef6ca49d7a9298377b755a", "text": "Traditional radio planning tools present a steep learning curve. We present BotRf, a Telegram Bot that facilitates the process by guiding non-experts in assessing the feasibility of radio links. Built on open source tools, BotRf can run on any smartphone or PC running Telegram. Using it on a smartphone has the added value that the Bot can leverage the internal GPS to enter coordinates. BotRf can be used in environments with low bandwidth as the generated data traffic is quite limited. 
We present examples of its use in Venezuela.", "title": "" }, { "docid": "4a65fcbc395eab512d8a7afe33c0f5ae", "text": "In eukaryotes, the spindle-assembly checkpoint (SAC) is a ubiquitous safety device that ensures the fidelity of chromosome segregation in mitosis. The SAC prevents chromosome mis-segregation and aneuploidy, and its dysfunction is implicated in tumorigenesis. Recent molecular analyses have begun to shed light on the complex interaction of the checkpoint proteins with kinetochores — structures that mediate the binding of spindle microtubules to chromosomes in mitosis. These studies are finally starting to reveal the mechanisms of checkpoint activation and silencing during mitotic progression.", "title": "" }, { "docid": "ce020748bd9bc7529036aa41dcd59a92", "text": "In this paper a new isolated SEPIC converter which is a proper choice for PV applications, is introduced and analyzed. The proposed converter has the advantage of high voltage gain while the switch voltage stress is same as a regular SEPIC converter. The converter operating modes are discussed and design considerations are presented. Also simulation results are illustrated which justifies the theoretical analysis. Finally the proposed converter is improved using active clamp technique.", "title": "" } ]
scidocsrr
e26691763ff4bc685f34d288d09a8332
Light it up: using paper circuitry to enhance low-fidelity paper prototypes for children
[ { "docid": "f641e0da7b9aaffe0fabd1a6b60a6c52", "text": "This paper introduces a low cost, fast and accessible technology to support the rapid prototyping of functional electronic devices. Central to this approach of 'instant inkjet circuits' is the ability to print highly conductive traces and patterns onto flexible substrates such as paper and plastic films cheaply and quickly. In addition to providing an alternative to breadboarding and conventional printed circuits, we demonstrate how this technique readily supports large area sensors and high frequency applications such as antennas. Unlike existing methods for printing conductive patterns, conductivity emerges within a few seconds without the need for special equipment. We demonstrate that this technique is feasible using commodity inkjet printers and commercially available ink, for an initial investment of around US$300. Having presented this exciting new technology, we explain the tools and techniques we have found useful for the first time. Our main research contribution is to characterize the performance of instant inkjet circuits and illustrate a range of possibilities that are enabled by way of several example applications which we have built. We believe that this technology will be of immediate appeal to researchers in the ubiquitous computing domain, since it supports the fabrication of a variety of functional electronic device prototypes.", "title": "" }, { "docid": "7efc1612114cde04a70733ce9e851ba9", "text": "Low-fidelity paper prototyping has proven to be a useful technique for designing graphical user interfaces [1]. Wizard of Oz prototyping for other input modalities, such as speech, also has a long history [2]. Yet to surface are guidelines for low-fidelity prototyping of multimodal applications, those that use multiple and sometimes simultaneous combination of different input types. This paper describes our recent research in low fidelity, multimodal, paper prototyping and suggest guidelines to be used by future designers of multimodal applications.", "title": "" } ]
[ { "docid": "2a77d3750d35fd9fec52514739303812", "text": "We present a framework for analyzing and computing motion plans for a robot that operates in an environment that both varies over time and is not completely predictable. We rst classify sources of uncertainty in motion planning into four categories, and argue that the problems addressed in this paper belong to a fundamental category that has received little attention. We treat the changing environment in a exible manner by combining traditional connguration space concepts with a Markov process that models the environment. For this context, we then propose the use of a motion strategy, which provides a motion command for the robot for each contingency that it could be confronted with. We allow the speciication of a desired performance criterion, such as time or distance, and determine a motion strategy that is optimal with respect to that criterion. We demonstrate the breadth of our framework by applying it to a variety of motion planning problems. Examples are computed for problems that involve a changing conng-uration space, hazardous regions and shelters, and processing of random service requests. To achieve this, we have exploited the powerful principle of optimality, which leads to a dynamic programming-based algorithm for determining optimal strategies. In addition, we present several extensions to the basic framework that incorporate additional concerns, such as sensing issues or changes in the geometry of the robot.", "title": "" }, { "docid": "b0e81e112b9aa7ebf653243f00b21f23", "text": "Recent research indicates that toddlers and infants succeed at various non-verbal spontaneous-response false-belief tasks; here we asked whether toddlers would also succeed at verbal spontaneous-response false-belief tasks that imposed significant linguistic demands. We tested 2.5-year-olds using two novel tasks: a preferential-looking task in which children listened to a false-belief story while looking at a picture book (with matching and non-matching pictures), and a violation-of-expectation task in which children watched an adult 'Subject' answer (correctly or incorrectly) a standard false-belief question. Positive results were obtained with both tasks, despite their linguistic demands. These results (1) support the distinction between spontaneous- and elicited-response tasks by showing that toddlers succeed at verbal false-belief tasks that do not require them to answer direct questions about agents' false beliefs, (2) reinforce claims of robust continuity in early false-belief understanding as assessed by spontaneous-response tasks, and (3) provide researchers with new experimental tasks for exploring early false-belief understanding in neurotypical and autistic populations.", "title": "" }, { "docid": "cc5f1304bb7564ec990cf61ada5c1c0f", "text": "In the present study, the herbal preparation of Ophthacare brand eye drops was investigated for its anti-inflammatory, antioxidant and antimicrobial activity, using in vivo and in vitro experimental models. Ophthacare brand eye drops exhibited significant anti-inflammatory activity in turpentine liniment-induced ocular inflammation in rabbits. The preparation dose-dependently inhibited ferric chloride-induced lipid peroxidation in vitro and also showed significant antibacterial activity against Escherichia coli and Staphylococcus aureus and antifungal activity against Candida albicans. 
All these findings suggest that Ophthacare brand eye drops can be used in the treatment of various ophthalmic disorders.", "title": "" }, { "docid": "da17a995148ffcb4e219bb3f56f5ce4a", "text": "As education communities grow more interested in STEM (science, technology, engineering, and mathematics), schools have integrated more technology and engineering opportunities into their curricula. Makerspaces for all ages have emerged as a way to support STEM learning through creativity, community building, and hands-on learning. However, little research has evaluated the learning that happens in these spaces, especially in young children. One framework that has been used successfully as an evaluative tool in informal and technology-rich learning spaces is Positive Technological Development (PTD). PTD is an educational framework that describes positive behaviors children exhibit while engaging in digital learning experiences. In this exploratory case study, researchers observed children in a makerspace to determine whether the environment (the space and teachers) contributed to children’s Positive Technological Development. N = 20 children and teachers from a Kindergarten classroom were observed over 6 hours as they engaged in makerspace activities. The children’s activity, teacher’s facilitation, and the physical space were evaluated for alignment with the PTD framework. Results reveal that children showed high overall PTD engagement, and that teachers and the space supported children’s learning in complementary aspects of PTD. Recommendations for practitioners hoping to design and implement a young children’s makerspace are discussed.", "title": "" }, { "docid": "82708e65107a0877a052ce81294f535c", "text": "Abstract—Cyber exercises used to assess the preparedness of a community against cyber crises, technology failures and Critical Information Infrastructure (CII) incidents. The cyber exercises also called cyber crisis exercise or cyber drill, involved partnerships or collaboration of public and private agencies from several sectors. This study investigates Organisation Cyber Resilience (OCR) of participation sectors in cyber exercise called X Maya in Malaysia. This study used a principal based cyber resilience survey called CSuite Executive checklist developed by World Economic Forum in 2012. To ensure suitability of the survey to investigate the OCR, the reliability test was conducted on C-Suite Executive checklist items. The research further investigates the differences of OCR in ten Critical National Infrastructure Information (CNII) sectors participated in the cyber exercise. The One Way ANOVA test result showed a statistically significant difference of OCR among ten CNII sectors participated in the cyber exercise.", "title": "" }, { "docid": "641a51f9a5af9fc9dba4be3d12829fd5", "text": "In this paper, we present a novel SpaTial Attention Residue Network (STAR-Net) for recognising scene texts. The overall architecture of our STAR-Net is illustrated in fig. 1. Our STARNet emphasises the importance of representative image-based feature extraction from text regions by the spatial attention mechanism and the residue learning strategy. It is by far the deepest neural network proposed for scene text recognition.", "title": "" }, { "docid": "625f1f11e627c570e26da9f41f89a28b", "text": "In this paper, we propose an approach to realize substrate integrated waveguide (SIW)-based leaky-wave antennas (LWAs) supporting continuous beam scanning from backward to forward above the cutoff frequency. 
First, through phase delay analysis, it was found that SIWs with straight transverse slots support backward and forward radiation of the -1-order mode with an open-stopband (OSB) in between. Subsequently, by introducing additional longitudinal slots as parallel components, the OSB can be suppressed, leading to continuous beam scanning at least from -40° through broadside to 35°. The proposed method only requires a planar structure and obtains less dispersive beam scanning compared with a composite right/left-handed (CRLH) LWA. Both simulations and measurements verify the intended beam scanning operation while verifying the underlying theory.", "title": "" }, { "docid": "837d1ef60937df15afc320b2408ad7b0", "text": "Zero-shot learning has tremendous application value in complex computer vision tasks, e.g. image classification, localization, image captioning, etc., for its capability of transferring knowledge from seen data to unseen data. Many recent proposed methods have shown that the formulation of a compatibility function and its generalization are crucial for the success of a zero-shot learning model. In this paper, we formulate a softmax-based compatibility function, and more importantly, propose a regularized empirical risk minimization objective to optimize the function parameter which leads to a better model generalization. In comparison to eight baseline models on four benchmark datasets, our model achieved the highest average ranking. Our model was effective even when the training set size was small and significantly outperforming an alternative state-of-the-art model in generalized zero-shot recognition tasks.", "title": "" }, { "docid": "714863ecaa627df1fee3301dde140995", "text": "Eye movement-based interaction offers the potential of easy, natural, and fast ways of interacting in virtual environments. However, there is little empirical evidence about the advantages or disadvantages of this approach. We developed a new interaction technique for eye movement interaction in a virtual environment and compared it to more conventional 3-D pointing. We conducted an experiment to compare performance of the two interaction types and to assess their impacts on spatial memory of subjects and to explore subjects' satisfaction with the two types of interactions. We found that the eye movement-based interaction was faster than pointing, especially for distant objects. However, subjects' ability to recall spatial information was weaker in the eye condition than the pointing one. Subjects reported equal satisfaction with both types of interactions, despite the technology limitations of current eye tracking equipment.", "title": "" }, { "docid": "7a54331811a4a93df69365b6756e1d5f", "text": "With object storage services becoming increasingly accepted as replacements for traditional file or block systems, it is important to effectively measure the performance of these services. Thus people can compare different solutions or tune their systems for better performance. However, little has been reported on this specific topic as yet. To address this problem, we present COSBench (Cloud Object Storage Benchmark), a benchmark tool that we are currently working on in Intel for cloud object storage services. In addition, in this paper, we also share the results of the experiments we have performed so far.", "title": "" }, { "docid": "2efb71ffb35bd05c7a124ffe8ad8e684", "text": "We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. 
Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six degree of freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high precision, and low-cost motion tracking for a wide range of interactive applications. We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.", "title": "" }, { "docid": "45c8f409a5783067b6dce332500d5a88", "text": "An online learning community enables learners to access up-to-date information via the Internet anytime–anywhere because of the ubiquity of the World Wide Web (WWW). Students can also interact with one another during the learning process. Hence, researchers want to determine whether such interaction produces learning synergy in an online learning community. In this paper, we take the Technology Acceptance Model as a foundation and extend the external variables as well as the Perceived Variables as our model and propose a number of hypotheses. A total of 436 Taiwanese senior high school students participated in this research, and the online learning community focused on learning English. The research results show that all the hypotheses are supported, which indicates that the extended variables can effectively predict whether users will adopt an online learning community. Finally, we discuss the implications of our findings for the future development of online English learning communities. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d798bc49068356495074f92b3bfe7a4b", "text": "This study presents an experimental evaluation of neural networks for nonlinear time-series forecasting. The e!ects of three main factors * input nodes, hidden nodes and sample size, are examined through a simulated computer experiment. Results show that neural networks are valuable tools for modeling and forecasting nonlinear time series while traditional linear methods are not as competent for this task. The number of input nodes is much more important than the number of hidden nodes in neural network model building for forecasting. Moreover, large sample is helpful to ease the over\"tting problem.", "title": "" }, { "docid": "5dcc5026f959b202240befbe56857ac4", "text": "When a meta-analysis on results from experimental studies is conducted, differences in the study design must be taken into consideration. A method for combining results across independent-groups and repeated measures designs is described, and the conditions under which such an analysis is appropriate are discussed. Combining results across designs requires that (a) all effect sizes be transformed into a common metric, (b) effect sizes from each design estimate the same treatment effect, and (c) meta-analysis procedures use design-specific estimates of sampling variance to reflect the precision of the effect size estimates.", "title": "" }, { "docid": "bcb615f8bfe9b2b13a4bfe72b698e4c7", "text": "is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. 
PARE has the right to authorize third party reproduction of this article in print, electronic and database forms. Researchers occasionally have to work with an extremely small sample size, defined herein as N ≤ 5. Some methodologists have cautioned against using the t-test when the sample size is extremely small, whereas others have suggested that using the t-test is feasible in such a case. The present simulation study estimated the Type I error rate and statistical power of the one-and two-sample t-tests for normally distributed populations and for various distortions such as unequal sample sizes, unequal variances, the combination of unequal sample sizes and unequal variances, and a lognormal population distribution. Ns per group were varied between 2 and 5. Results show that the t-test provides Type I error rates close to the 5% nominal value in most of the cases, and that acceptable power (i.e., 80%) is reached only if the effect size is very large. This study also investigated the behavior of the Welch test and a rank-transformation prior to conducting the t-test (t-testR). Compared to the regular t-test, the Welch test tends to reduce statistical power and the t-testR yields false positive rates that deviate from 5%. This study further shows that a paired t-test is feasible with extremely small Ns if the within-pair correlation is high. It is concluded that there are no principal objections to using a t-test with Ns as small as 2. A final cautionary note is made on the credibility of research findings when sample sizes are small. The dictum \" more is better \" certainly applies to statistical inference. According to the law of large numbers, a larger sample size implies that confidence intervals are narrower and that more reliable conclusions can be reached. The reality is that researchers are usually far from the ideal \" mega-trial \" performed with 10,000 subjects (cf. Ioannidis, 2013) and will have to work with much smaller samples instead. For a variety of reasons, such as budget, time, or ethical constraints, it may not be possible to gather a large sample. In some fields of science, such as research on rare animal species, persons having a rare illness, or prodigies scoring at the extreme of an ability distribution (e.g., Ruthsatz & Urbach, 2012), …", "title": "" }, { "docid": "7f3bccab6d6043d3dedc464b195df084", "text": "This paper introduces a new probabilistic graphical model called gated Bayesian network (GBN). This model evolved from the need to represent processes that include several distinct phases. In essence, a GBN is a model that combines several Bayesian networks (BNs) in such a manner that they may be active or inactive during queries to the model. We use objects called gates to combine BNs, and to activate and deactivate them when predefined logical statements are satisfied. In this paper we also present an algorithm for semi-automatic learning of GBNs. We use the algorithm to learn GBNs that output buy and sell decisions for use in algorithmic trading systems. We show how the learnt GBNs can substantially lower risk towards invested capital, while they at the same time generate similar or better rewards, compared to the benchmark investment strategy buy-and-hold. We also explore some differences and similarities between GBNs and other related formalisms.", "title": "" }, { "docid": "5b2bc42cf2a801dbed78b808fdba894b", "text": "In this paper, we report the development of a contactless position sensor with thin and planar structures for both sensor and target. 
The target is designed to be a compact resonator with resonance near the operating frequency, which improves the signal strength and increases the sensing range. The sensor is composed of a source coil and a pair of symmetrically arranged detecting coils. With differential measurement technique, highly accurate edge detection can be realized. Experiment results show that the sensor operates at varying gap size between the target and the sensor, even when the target is at 30 mm away, and the achieved accuracy is within 2% of the size of the sensing coil.", "title": "" }, { "docid": "9871a5673f042b0565c50295be188088", "text": "Formal security analysis has proven to be a useful tool for tracking modifications in communication protocols in an automated manner, where full security analysis of revisions requires minimum efforts. In this paper, we formally analysed prominent IoT protocols and uncovered many critical challenges in practical IoT settings. We address these challenges by using formal symbolic modelling of such protocols under various adversaries and security goals. Furthermore, this paper extends formal analysis to cryptographic Denial-of-Service (DoS) attacks and demonstrates that a vast majority of IoT protocols are vulnerable to such resource exhaustion attacks. We present a cryptographic DoS attack countermeasure that can be generally used in many IoT protocols. Our study of prominent IoT protocols such as CoAP and MQTT shows the benefits of our approach.", "title": "" }, { "docid": "36be150e997a1fb6b245e8c88688b1b8", "text": "Restricted Boltzmann Machines (RBMs) are generative models which can learn useful representations from samples of a dataset in an unsupervised fashion. They have been widely employed as an unsupervised pre-training method in machine learning. RBMs have been modified to model time series in two main ways: The Temporal RBM stacks a number of RBMs laterally and introduces temporal dependencies between the hidden layer units; The Conditional RBM, on the other hand, considers past samples of the dataset as a conditional bias and learns a representation which takes these into account. Here we propose a new training method for both the TRBM and the CRBM, which enforces the dynamic structure of temporal datasets. We do so by treating the temporal models as denoising autoencoders, considering past frames of the dataset as corrupted versions of the present frame and minimizing the reconstruction error of the present data by the model. We call this approach Temporal Autoencoding. This leads to a significant improvement in the performance of both models in a filling-in-frames task across a number of datasets. The error reduction for motion capture data is 56% for the CRBM and 80% for the TRBM. Taking the posterior mean prediction instead of single samples further improves the model’s estimates, decreasing the error by as much as 91% for the CRBM on motion capture data. We also trained the model to perform forecasting on a large number of datasets and have found TA pretraining to consistently improve the performance of the forecasts. Furthermore, by looking at the prediction error across time, we can see that this improvement reflects a better representation of the dynamics of the data as opposed to a bias towards reconstructing the observed data on a short time scale. 
We believe this novel approach of mixing contrastive divergence and autoencoder training yields better models of temporal data, bridging the way towards more robust generative models of time series.", "title": "" }, { "docid": "e4cfcd8bd577fc04480c62bbc6e94a41", "text": "Background and Objective: Binaural interaction component has been seen to be effective in assessing the binaural interaction process in normal hearing individuals. However, there is a lack of literature regarding the effects of SNHL on the Binaural Interaction Component of ABR. Hence, it is necessary to study binaural interaction occurs at the brainstem when there is an associated hearing impairment. Methods: Three groups of participants in the age range of 30 to 55 years were taken for study i.e. one control group and two experimental groups (symmetrical and asymmetrical hearing loss). The binaural interaction component was determined by subtracting the binaurally evoked auditory potentials from the sum of the monaural auditory evoked potentials: BIC= [{left monaural + right monaural)-binaural}. The latency and amplitude of V peak was estimated for click evoked ABR for monaural and binaural recordings. Results: One way ANOVA revealed a significant difference for binaural interaction component in terms of latency between different groups. One-way ANOVA also showed no significant difference seen between the three different groups in terms of amplitude. Conclusion: The binaural interaction component of auditory brainstem response can be used to evaluate the binaural interaction in symmetrical and asymmetrical hearing loss. This will be helpful to circumvent the effect of peripheral hearing loss in binaural processing of the auditory system. Additionally the test does not require any behavioral cooperation from the client, hence can be administered easily.", "title": "" } ]
scidocsrr
ded85abce2f0c11f8fcb7dec6f010e2a
Reinforcement Learning with A* and a Deep Heuristic
[ { "docid": "45940a48b86645041726120fb066a1fa", "text": "For large state-space Markovian Decision Problems MonteCarlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.", "title": "" }, { "docid": "9cb033c92c06f804118381f61dd884f9", "text": "Training state-of-the-art, deep neural networks is computationally expensive. One way to reduce the training time is to normalize the activities of the neurons. A recently introduced technique called batch normalization uses the distribution of the summed input to a neuron over a mini-batch of training cases to compute a mean and variance which are then used to normalize the summed input to that neuron on each training case. This significantly reduces the training time in feedforward neural networks. However, the effect of batch normalization is dependent on the mini-batch size and it is not obvious how to apply it to recurrent neural networks. In this paper, we transpose batch normalization into layer normalization by computing the mean and variance used for normalization from all of the summed inputs to the neurons in a layer on a single training case. Like batch normalization, we also give each neuron its own adaptive bias and gain which are applied after the normalization but before the non-linearity. Unlike batch normalization, layer normalization performs exactly the same computation at training and test times. It is also straightforward to apply to recurrent neural networks by computing the normalization statistics separately at each time step. Layer normalization is very effective at stabilizing the hidden state dynamics in recurrent networks. Empirically, we show that layer normalization can substantially reduce the training time compared with previously published techniques.", "title": "" } ]
[ { "docid": "be8b89fc46c919ab53abf86642bb8f8a", "text": "us to rethink our whole value frame concerning means and ends, and the place of technology within this frame. The ambit of HCI has expanded enormously since the field’s emergence in the early 1980s. Computing has changed significantly; mobile and ubiquitous communication networks span the globe, and technology has been integrated into all aspects of our daily lives. Computing is not simply for calculating, but rather is a medium through which we collaborate and interact with other people. The focus of HCI is not so much on human-computer interaction as it is on human activities mediated by computing [1]. Just as the original meaning of ACM (Association for Computing Machinery) has become dated, perhaps so too has the original meaning of HCI (humancomputer interaction). It is time for us to rethink how we approach issues of people and technology. In this article I explore how we might develop a more humancentered approach to computing. for the 21st century, centered on the exploration of new forms of living with and through technologies that give primacy to human actors, their values, and their activities. The area of concern is much broader than the simple “fit” between people and technology to improve productivity (as in the classic human factors mold); it encompasses a much more challenging territory that includes the goals and activities of people, their values, and the tools and environments that help shape their everyday lives. We have evermore sophisticated and complex technologies available to us in the home, at work, and on the go, yet in many cases, rather than augmenting our choices and capabilities, this plethora of new widgets and systems seems to confuse us—or even worse, disable us. (Surely there is something out of control when a term such as “IT disability” can be taken seriously in national research programs.) Solutions do not reside simply in ergonomic corrections to the interface, but instead require Some years ago, HCI researcher Panu Korhonen of Nokia outlined to me how HCI is changing, as follows: In the early days the Nokia HCI people were told “Please evaluate our user interface, and make it easy to use.” That gave way to “Please help us design this user interface so that it is easy to use.” That, in turn, led to a request: “Please help us find what the users really need so that we know how to design this user interface.” And now, the engineers are pleading with us: “Look at this area of", "title": "" }, { "docid": "1e18be7d7e121aa899c96cbcf5ea906b", "text": "Internet-based technologies such as micropayments increasingly enable the sale and delivery of small units of information. This paper draws attention to the opposite strategy of bundling a large number of information goods, such as those increasingly available on the Internet, for a fixed price that does not depend on how many goods are actually used by the buyer. We analyze the optimal bundling strategies for a multiproduct monopolist, and we find that bundling very large numbers of unrelated information goods can be surprisingly profitable. The reason is that the law of large numbers makes it much easier to predict consumers' valuations for a bundle of goods than their valuations for the individual goods when sold separately. 
As a result, this \"predictive value of bundling\" makes it possible to achieve greater sales, greater economic efficiency and greater profits per good from a bundle of information goods than can be attained when the same goods are sold separately. Our results do not extend to most physical goods, as the marginal costs of production typically negate any benefits from the predictive value of bundling. While determining optimal bundling strategies for more than two goods is a notoriously difficult problem, we use statistical techniques to provide strong asymptotic results and bounds on profits for bundles of any arbitrary size. We show how our model can be used to analyze the bundling of complements and substitutes, bundling in the presence of budget constraints and bundling of goods with various types of correlations. We find that when different market segments of consumers differ systematically in their valuations for goods, simple bundling will no longer be optimal. However, by offering a menu of different bundles aimed at each market segment, a monopolist can generally earn substantially higher profits than would be possible without bundling. The predictions of our analysis appear to be consistent with empirical observations of the markets for Internet and on-line content, cable television programming, and copyrighted music. ________________________________________ We thank Timothy Bresnahan, Hung-Ken Chien, Frank Fisher, Michael Harrison, Paul Kleindorfer, Thomas Malone, Robert Pindyck, Nancy Rose, Richard Schmalensee, John Tsitsiklis, Hal Varian, Albert Wenger, Birger Wernerfelt, four anonymous reviewers and seminar participants at the University of California at Berkeley, MIT, New York University, Stanford University, University of Rochester, the Wharton School, the 1995 Workshop on Information Systems and Economics and the 1998 Workshop on Marketing Science and the Internet for many helpful suggestions. Any errors that remain are only our responsibility. BUNDLING INFORMATION GOODS Page 1", "title": "" }, { "docid": "16fa1af9571b623aa756d49fb269ecee", "text": "The subgraph isomorphism problem is one of the most important problems for pattern recognition in graphs. Its applications are found in many di®erent disciplines, including chemistry, medicine, and social network analysis. Because of the NP-completeness of the problem, the existing exact algorithms exhibit an exponential worst-case running time. In this paper, we propose several improvements to the well-known Ullmann's algorithm for the problem. The improvements lower the time consumption as well as the space requirements of the algorithm. We experimentally demonstrate the e±ciency of our improvement by comparing it to another set of improvements called FocusSearch, as well as other state-of-the-art algorithms, namely VF2 and LAD.", "title": "" }, { "docid": "f16f22302df99de531a2406ef9e024db", "text": "We propose a new hydrogenated amorphous silicon thin-film transistor (a-Si:H TFT) pixel circuit for an active matrix organic light-emitting diode (AMOLED) employing a voltage programming. The proposed a-Si:H TFT pixel circuit, which consists of five switching TFTs, one driving TFT, and one capacitor, successfully minimizes a decrease of OLED current caused by threshold voltage degradation of a-Si:H TFT and OLED. 
Our experimental results, based on the bias-temperature stress, exhibit that the output current for OLED is decreased by 7% in the proposed pixel, while it is decreased by 28% in the conventional 2-TFT pixel.", "title": "" }, { "docid": "158de7fe10f35a78e4b62d2bc46d9b0d", "text": "The Internet of Things promises ubiquitous connectivity of everything everywhere, which represents the biggest technology trend in the years to come. It is expected that by 2020 over 25 billion devices will be connected to cellular networks; far beyond the number of devices in current wireless networks. Machine-to-machine communications aims to provide the communication infrastructure for enabling IoT by facilitating the billions of multi-role devices to communicate with each other and with the underlying data transport infrastructure without, or with little, human intervention. Providing this infrastructure will require a dramatic shift from the current protocols mostly designed for human-to-human applications. This article reviews recent 3GPP solutions for enabling massive cellular IoT and investigates the random access strategies for M2M communications, which shows that cellular networks must evolve to handle the new ways in which devices will connect and communicate with the system. A massive non-orthogonal multiple access technique is then presented as a promising solution to support a massive number of IoT devices in cellular networks, where we also identify its practical challenges and future research directions.", "title": "" }, { "docid": "df487337795d03d8538024aedacbbbe9", "text": "This study aims to make an inquiry regarding the advantages and challenges of integrating augmented reality (AR) into the library orientation programs of academic/research libraries. With the vast number of emerging technologies that are currently being introduced to the library world, it is essential for academic librarians to fully utilize these technologies to their advantage. However, it is also of equal importance for them to first make careful analysis and research before deciding whether to adopt a certain technology or not. AR offers a strategic medium through which librarians can attach digital information to real-world objects and simply let patrons interact with them. It is a channel that librarians can utilize in order to disseminate information and guide patrons in their studies or researches. And while it is expected for AR to grow tremendously in the next few years, it becomes more inevitable for academic librarians to acquire related IT skills in order to further improve the services they offer in their respective colleges and universities. The study shall employ the pragmatic approach to research, conducting an extensive review of available literature on AR as used in academic libraries, designing a prototype to illustrate how AR can be integrated to an existing library orientation program, and performing surveys and interviews on patrons and librarians who used it. This study can serve as a guide in order for academic librarians to assess whether implementing AR in their respective libraries will be beneficial to them or not.", "title": "" }, { "docid": "13659d5f693129620132bf22e021ad70", "text": "Individuals with high functioning autism (HFA) or Asperger Syndrome (AS) exhibit difficulties in the knowledge or correct performance of social skills. 
This subgroup's social difficulties appear to be associated with deficits in three social cognition processes: theory of mind, emotion recognition and executive functioning. The current study outlines the development and initial administration of the group-based Social Competence Intervention (SCI), which targeted these deficits using cognitive behavioral principles. Across 27 students age 11-14 with a HFA/AS diagnosis, results indicated significant improvement on parent reports of social skills and executive functioning. Participants evidenced significant growth on direct assessments measuring facial expression recognition, theory of mind and problem solving. SCI appears promising, however, larger samples and application in naturalistic settings are warranted.", "title": "" }, { "docid": "9889cb9ae08cd177e6fa55c3ae7b8831", "text": "Design and developmental procedure of strip-line based 1.5 MW, 30-96 MHz, ultra-wideband high power 3 dB hybrid coupler has been presented and its applicability in ion cyclotron resonance heating (ICRH) in tokamak is discussed. For the high power handling capability, spacing between conductors and ground need to very high. Hence other structural parameters like strip-width, strip thickness coupling gap, and junction also become large which can be gone upto optimum limit where various constrains like fabrication tolerance, discontinuities, and excitation of higher TE and TM modes become prominent and significantly deteriorates the desired parameters of the coupled lines system. In designed hybrid coupler, two 8.34 dB coupled lines are connected in tandem to get desired coupling of 3 dB and air is used as dielectric. The spacing between ground and conductors are taken as 0.164 m for 1.5 MW power handling capability. To have the desired spacing, each of 8.34 dB segments are designed with inner dimension of 3.6 × 1.0 × 40 cm where constraints have been significantly realized, compensated, and applied in designing of 1.5 MW hybrid coupler and presented in paper.", "title": "" }, { "docid": "4d56f134c2e2a597948bcf9b1cf37385", "text": "This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG - a manually created largescale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code is available at http://sscnet.cs.princeton.edu.", "title": "" }, { "docid": "54041038352cf93f57d56153085a6f7c", "text": "This study seeks to evaluate how student information retention and comprehension can be influenced by their preferred note taking medium. 
One-hundred and nine college students watched lectures and took notes with an assigned medium: longhand or computer. Prior to watching the lectures, participants self-reported their preferred note taking medium. These lectures were pre-recorded and featured PowerPoint presentations containing information relating to the lecture. After the lectures, students were able to review their notes briefly before they engaged in activities unrelated to the lecture. They then took two tests based on the lecture material and completed a questionnaire further inquiring about their note taking tendencies. Tests contained two types of questions: conceptual and specific. A main effect of question type was found, with both computer and longhand note takers performing better on specific questions. Further, computer-preferred note takers who were forced to take notes by hand performed worst overall on the tests. Regardless of preference and question type, computer and longhand users performed equally well overall, and the interaction of medium and question type on test performance was not significant. For transcription tendencies, computer note takers generated more words and more 3-word verbatim sequences than longhand note takers. For note taking tendencies, the use of computer notes somewhat positively correlated with the use of no notes. The results of this study help to further understand how students’ preferred note taking medium can influence performance on subsequent tests.", "title": "" }, { "docid": "80966e593716f0533b53f20d070422b9", "text": "Figure 1: MAN for MDTC. The figure demonstrates the training on a mini-batch of data from one domain. One training iteration consists of one such mini-batch training from each domain. The parameters of Fs, Fd, C are updated together, and the training flows are illustrated by the green arrows. The parameters of D are updated separately, shown in red arrows. Solid lines indicate forward passes while dotted lines are backward passes. JD Fs is the domain loss for Fs, which is anticorrelated with JD (e.g. JD Fs = JD). (See §2,§3)", "title": "" }, { "docid": "92d3bb6142eafc9dc9f82ce6a766941a", "text": "The classical Rough Set Theory (RST) always generates too many rules, making it difficult for decision makers to choose a suitable rule. In this study, we use two processes (pre process and post process) to select suitable rules and to explore the relationship among attributes. In pre process, we propose a pruning process to select suitable rules by setting up a threshold on the support object of decision rules, to thereby solve the problem of too many rules. The post process used the formal concept analysis from these suitable rules to explore the attribute relationship and the most important factors affecting decision making for choosing behaviours of personal investment portfolios. In this study, we explored the main concepts (characteristics) for the conservative portfolio: the stable job, less than 4 working years, and the gender is male; the moderate portfolio: high school education, the monthly salary between NT$30,001 (US$1000) and NT$80,000 (US$2667), the gender is male; and the aggressive portfolio: the monthly salary between NT$30,001 (US$1000) and NT$80,000 (US$2667), less than 4 working years, and a stable job. The study result successfully explored the most important factors affecting the personal investment portfolios and the suitable rules that can help decision makers. 2010 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "127ef38020617fda8598971b3f10926f", "text": "Web services are important for creating distributed applications on the Web. In fact, they're a key enabler for service-oriented architectures that focus on service reuse and interoperability. The World Wide Web Consortium (W3C) has recently finished work on two important standards for describing Web services the Web Services Description Language (WSDL) 2.0 and Semantic Annotations for WSDL and XML Schema (SAWSDL). Here, the authors discuss the latter, which is the first standard for adding semantics to Web service descriptions.", "title": "" }, { "docid": "dde77ad495cf14c29ae4d18a1cf8e007", "text": "A novel differentially fed dual-band implantable antenna is proposed for the first time for a fully implantable neuro-microsystem. The antenna operates at two center frequencies of 433.9 MHz and 542.4 MHz, which are close to the 402-405 MHz medical implant communication services (MICS) band, to support sub-GHz wideband communication for high-data rate implantable neural recording application. The size of the antenna is 480.06 mm3 (27 mm × 14 mm × 1.27 mm). The simulated and measured bandwidths are 7.3% and 7.9% at the first resonant frequency, 5.4% and 6.4% at the second resonant frequency. The specific absorption rate (SAR) distribution induced by the implantable antenna inside a tissue-mimicking solution is evaluated. The performance of the communication link between the implanted antenna and external half-wavelength dual-band dipole is also examined.", "title": "" }, { "docid": "73e616ebf26c6af34edb0d60a0ce1773", "text": "While recent deep neural networks have achieved a promising performance on object recognition, they rely implicitly on the visual contents of the whole image. In this paper, we train deep neural networks on the foreground (object) and background (context) regions of images respectively. Considering human recognition in the same situations, networks trained on the pure background without objects achieves highly reasonable recognition performance that beats humans by a large margin if only given context. However, humans still outperform networks with pure object available, which indicates networks and human beings have different mechanisms in understanding an image. Furthermore, we straightforwardly combine multiple trained networks to explore different visual cues learned by different networks. Experiments show that useful visual hints can be explicitly learned separately and then combined to achieve higher performance, which verifies the advantages of the proposed framework.", "title": "" }, { "docid": "9a8918cb818a12c3da2e5d210d8b9d43", "text": "In the process of information system optimization and upgrading in the era of cloud computing, the number and variety of business requirements are increasingly complex, keeps sustained growth, the process of continuous integration delivery of information systems becomes increasingly complex, the amount of repetitive work is growing. This paper focuses on the continuous integration of specific information systems, a collaborative work scheme for continuous integrated delivery based on Jenkins and Ansible is proposed. Both theory and practice show that continuous integrated delivery cooperative systems can effectively improve the efficiency and quality of continuous integrated delivery of information systems. 
The effect of the optimization and upgrading of the information system is obvious.", "title": "" }, { "docid": "923363771ee11cc5b06917385f5832c0", "text": "This article presents a novel automatic method (AutoSummENG) for the evaluation of summarization systems, based on comparing the character n-gram graphs representation of the extracted summaries and a number of model summaries. The presented approach is language neutral, due to its statistical nature, and appears to hold a level of evaluation performance that matches and even exceeds other contemporary evaluation methods. Within this study, we measure the effectiveness of different representation methods, namely, word and character n-gram graph and histogram, different n-gram neighborhood indication methods as well as different comparison methods between the supplied representations. A theory for the a priori determination of the methods' parameters along with supporting experiments concludes the study to provide a complete alternative to existing methods concerning the automatic summary system evaluation process.", "title": "" }, { "docid": "a05b34697055678a607ab4db4d87fa07", "text": "This paper presents a novel set of image descriptors that encodes information from color, shape, spatial and local features of an image to improve upon the popular Pyramid of Histograms of Oriented Gradients (PHOG) descriptor for object and scene image classification. In particular, a new Gabor-PHOG (GPHOG) image descriptor created by enhancing the local features of an image using multiple Gabor filters is first introduced for feature extraction. Second, a comparative assessment of the classification performance of the GPHOG descriptor is made in grayscale and six different color spaces to further propose two novel color GPHOG descriptors that perform well on different object and scene image categories. Finally, an innovative Fused Color GPHOG (FC–GPHOG) descriptor is presented by integrating the Principal Component Analysis (PCA) features of the GPHOG descriptors in the six color spaces to combine color, shape and local feature information. Feature extraction for the proposed descriptors employs PCA and Enhanced Fisher Model (EFM), and the nearest neighbor rule is used for final classification. Experimental results using the MIT Scene dataset and the Caltech 256 object categories dataset show that the proposed new FC–GPHOG descriptor achieves a classification performance better than or comparable to other popular image descriptors, such as the Scale Invariant Feature Transform (SIFT) based Pyramid Histograms of visual Words descriptor, Color SIFT four Concentric Circles, Spatial Envelope, and Local Binary Patterns.", "title": "" }, { "docid": "8ba3a0a96213d3ee83b8f7ca91e33137", "text": "\"What other people think\" has always been an important piece of information for most of us during the decision-making process. Today people tend to make their opinions available to other people via the Internet. As a result, the Web has become an excellent source of consumer opinions. There are now numerous Web resources containing such opinions, e.g., product reviews forums, discussion groups, and blogs. But, it is really difficult for a customer to read all of the reviews and make an informed decision on whether to purchase the product. It is also difficult for the manufacturer of the product to keep track and manage customer opinions. 
Also, focusing on just user ratings (stars) is not a sufficient source of information for a user or the manufacturer to make decisions. Therefore, mining online reviews (opinion mining) has emerged as an interesting new research direction. Extracting aspects and the corresponding ratings is an important challenge in opinion mining. An aspect is an attribute or component of a product, e.g. 'zoom' for a digital camera. A rating is an intended interpretation of the user satisfaction in terms of numerical values. Reviewers usually express the rating of an aspect by a set of sentiments, e.g. 'great zoom'. In this tutorial we cover opinion mining in online product reviews with the focus on aspect-based opinion mining. This problem is a key task in the area of opinion mining and has attracted a lot of researchers in the information retrieval community recently. Several opinion related information retrieval tasks can benefit from the results of aspect-based opinion mining and therefore it is considered as a fundamental problem. This tutorial covers not only general opinion mining and retrieval tasks, but also state-of-the-art methods, challenges, applications, and also future research directions of aspect-based opinion mining.", "title": "" }, { "docid": "83742a3fcaed826877074343232be864", "text": "In this paper we propose a design of the main modulation and demodulation units of a modem compliant with the new DVB-S2 standard (Int. J. Satellite Commun. 2004; 22:249–268). A typical satellite channel model consistent with the targeted applications of the aforementioned standard is assumed. In particular, non-linear pre-compensation as well as synchronization techniques are described in detail and their performance assessed by means of analysis and computer simulations. The proposed algorithms are shown to provide a good trade-off between complexity and performance and they apply to both the broadcast and the unicast profiles, the latter allowing the exploitation of adaptive coding and modulation (ACM) (Proceedings of the 20th AIAA Satellite Communication Systems Conference, Montreal, AIAA-paper 2002-1863, May 2002). Finally, end-to-end system performances in term of BER versus the signal-to-noise ratio are shown as a result of extensive computer simulations. The whole communication chain is modelled in these simulations, including the BCH and LDPC coder, the modulator with the pre-distortion techniques, the satellite transponder model with its typical impairments, the downlink chain inclusive of the RF-front-end phase noise, the demodulator with the synchronization sub-system units and finally the LDPC and BCH decoders. Copyright # 2004 John Wiley & Sons, Ltd.", "title": "" } ]
scidocsrr
82c736b06c73d33bc16022397008a403
Optimizing Majority-Inverter Graphs with functional hashing
[ { "docid": "cc7033023e1c5a902dfa10c8346565c4", "text": "The Satisfiability Modulo Theories (SMT) problem is a decision problem for logical first-order formulas with respect to combinations of background theories such as arithmetic, bit-vectors, arrays, and uninterpreted functions. Z3 is a new and efficient SMT solver freely available from Microsoft Research. It is used in various software verification and analysis applications.", "title": "" } ]
[ { "docid": "1ca70e99cf3dc1957627efc68af32e0c", "text": "In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature and the overlap in the sparsity patterns of two tasks controls the amount of sharing across these. Our model is based on the assumption that task parameters within a group lie in a low dimensional subspace but allows the tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods.", "title": "" }, { "docid": "693e935d405b255ac86b8a9f5e7852a3", "text": "Recent developments have demonstrated the capacity of restricted Boltzmann machines (RBM) to be powerful generative models, able to extract useful features from input data or construct deep artificial neural networks. In such settings, the RBM only yields a preprocessing or an initialization for some other model, instead of acting as a complete supervised model in its own right. In this paper, we argue that RBMs can provide a self-contained framework for developing competitive classifiers. We study the Classification RBM (ClassRBM), a variant on the RBM adapted to the classification setting. We study different strategies for training the ClassRBM and show that competitive classification performances can be reached when appropriately combining discriminative and generative training objectives. Since training according to the generative objective requires the computation of a generally intractable gradient, we also compare different approaches to estimating this gradient and address the issue of obtaining such a gradient for problems with very high dimensional inputs. Finally, we describe how to adapt the ClassRBM to two special cases of classification problems, namely semi-supervised and multitask learning.", "title": "" }, { "docid": "386855a40950de7dc7f01f81f2df4750", "text": "Androgen control of penis development/growth is unclear. In rats, androgen action in a foetal 'masculinisation programming window' (MPW; e15.5-e18.5) predetermines penile length and hypospadias occurrence. This has implications for humans (e.g. micropenis). Our studies aimed to establish in rats when androgen action/administration affects development/growth of the penis and if deficits in MPW androgen action were rescuable postnatally. Thus, pregnant rats were treated with flutamide during the MPW +/- postnatal testosterone propionate (TP) treatment. To assess penile growth responsiveness, rats were treated with TP in various time windows (late foetal, neonatal through early puberty, puberty onset, or combinations thereof). Phallus length, weight, and morphology, hypospadias and anogenital distance (AGD) were measured in mid-puberty (d25) or adulthood (d90) in males and females, plus serum testosterone in adult males. MPW flutamide exposure reduced adult penile length and induced hypospadias dose-dependently; this was not rescued by postnatal TP treatment. In normal rats, foetal (e14.5-e21.5) TP exposure did not affect male penis size but increased female clitoral size. 
In males, TP exposure from postnatal d1-24 or at puberty (d15-24), increased penile length at d25, but not ultimately in adulthood. Foetal + postnatal TP (e14-postnatal d24) increased penile size at d25 but reduced it at d90 (due to reduced endogenous testosterone). In females, this treatment caused the biggest increase in adult clitoral size but, unlike in males, phallus size was unaffected by TP during puberty (d15-24). Postnatal TP treatment advanced penile histology at d25 to more resemble adult histology. AGD strongly correlated with final penis length. It is concluded that adult penile size depends critically on androgen action during the MPW but subsequent growth depends on later androgen exposure. Foetal and/or postnatal TP exposure does not increase adult penile size above its 'predetermined' length though its growth towards this maximum is advanced by peripubertal TP treatment.", "title": "" }, { "docid": "2905229f5afba4958a57128f7a56db4c", "text": "This paper presents a novel approach to depth estimation using a multiple color-filter aperture (MCA) camera and its application to multifocusing. An image acquired by the MCA camera contains spatially varying misalignment among RGB color channels, where the direction and length of the misalignment is a function of the distance of an object from the plane of focus. Therefore, if the misalignment is estimated from the MCA output image, multifocusing and depth estimation become possible using a set of image processing algorithms. We first segment the image into multiple clusters having approximately uniform misalignment using a color-based region classification method, and then find a rectangular region that encloses each cluster. For each of the rectangular regions in the RGB color channels, color shifting vectors are estimated using a phase correlation method. After the set of three clusters are aligned in the opposite direction of the estimated color shifting vectors, the aligned clusters are fused to produce an approximately in-focus image. Because of the finite size of the color-filter apertures, the fused image still contains a certain amount of spatially varying out-of-focus blur, which is removed by using a truncated constrained least-squares filter followed by a spatially adaptive artifacts removing filter. Experimental results show that the MCA-based multifocusing method significantly enhances the visual quality of an image containing multiple objects of different distances, and can be fully or partially incorporated into multifocusing or extended depth of field systems. The MCA camera also realizes single camera-based depth estimation, where the displacement between multiple apertures plays a role of the baseline of a stereo vision system. Experimental results show that the estimated depth is accurate enough to perform a variety of vision-based tasks, such as image understanding, description, and robot vision.", "title": "" }, { "docid": "ea94a3c561476e88d5ac2640656a3f92", "text": "Point cloud is a basic description of discrete shape information. Parameterization of unorganized points is important for shape analysis and shape reconstruction of natural objects. In this paper we present a new algorithm for global parameterization of an unorganized point cloud and its application to the meshing of the cloud. Our method is guided by principal directions so as to preserve the intrinsic geometric properties. 
After initial estimation of principal directions, we develop a kNN (k-nearest neighbor) graph-based method to get a smooth direction field. Then the point cloud is cut to be topologically equivalent to a disk. The global parameterization is computed and its gradients align well with the guided direction field. A mixed integer solver is used to guarantee a seamless parameterization across the cut lines. The resultant parameterization can be used to triangulate and quadrangulate the point cloud simultaneously in a fully automatic manner, where the shape of the data is of any genus. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "57514ae31c792ed50677f39166cf5dd8", "text": "Rapid prototyping (RP) techniques are a group of advanced manufacturing processes that can produce custom-made objects directly from computer data such as Computer Aided Design (CAD), Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) data. Using RP fabrication techniques, constructs with controllable and complex internal architecture with appropriate mechanical properties can be achieved. One of the attractive and promising utilizations of RP techniques is related to tissue engineering (TE) scaffold fabrication. A tissue engineering scaffold is a 3D construction that acts as a template for tissue regeneration. Although several conventional techniques such as solvent casting and gas forming are utilized in scaffold fabrication, these processes show poor interconnectivity and uncontrollable porosity of the produced scaffolds. So, RP techniques become the best alternative fabrication methods for TE scaffolds. This paper reviews the current state of the art in the area of tissue engineering scaffold fabrication using advanced RP processes, as well as the current limitations and future trends in scaffold fabrication RP techniques. Keywords—Biomanufacturing, Rapid prototyping, Solid Free Form Fabrication, Scaffold Fabrication, Tissue Engineering", "title": "" }, { "docid": "9a2ab1d198468819f32a2b74334528ae", "text": "This paper introduces GeoSpark, an in-memory cluster computing framework for processing large-scale spatial data. GeoSpark consists of three layers: Apache Spark Layer, Spatial RDD Layer and Spatial Query Processing Layer. Apache Spark Layer provides basic Spark functionalities that include loading / storing data to disk as well as regular RDD operations. Spatial RDD Layer consists of three novel Spatial Resilient Distributed Datasets (SRDDs) which extend regular Apache Spark RDDs to support geometrical and spatial objects. GeoSpark provides a geometrical operations library that accesses Spatial RDDs to perform basic geometrical operations (e.g., Overlap, Intersect). System users can leverage the newly defined SRDDs to effectively develop spatial data processing programs in Spark. The Spatial Query Processing Layer efficiently executes spatial query processing algorithms (e.g., Spatial Range, Join, KNN query) on SRDDs. GeoSpark also allows users to create a spatial index (e.g., R-tree, Quad-tree) that boosts spatial data processing performance in each SRDD partition. Preliminary experiments show that GeoSpark achieves better run time performance than its Hadoop-based counterparts (e.g., SpatialHadoop).", "title": "" }, { "docid": "f8a7146480ab150678095023ccde6088", "text": "The enterprise email promises to be a rich source for knowledge discovery. 
This is made possible due to the direct nature of communication, support for diverse media types, active participation of entities and presence of chronological ordering of messages. Also, the enterprise emails are more trustworthy than external emails due to their formal nature. This data source has not been fully tapped. In fact, the existing work on profiling of emails focuses primarily on expertise identification and retrieval. Even in these studies, the researchers have made some restrictive assumptions. For instance, in many of the formulations, the underlying system assumes a centralized data repository, and the communication network is complete. They do not account for individual biases in an email while mining and aggregating results. Furthermore, email holds a fair amount of personal and organizational sensitive information. None of the existing work on email profiling suggests anything on alleviating the individual and organizational privacy concerns.\n In this paper, we propose a system for building an individual's perceived knowledge profile (\"What she knows?\"), trends profile (\"In which direction and how far her expertise has grown?\"), and team profile (\"What all her teammates know?\"). The proposed system operates in a distributed network and performs analysis of emails residing on a time-varying local email database, with no prior assumptions about the environment. It also takes care of missing nodes in a partial communication network, by deducing their profile from the perceived profiles of their peers and their common interest. We developed a two-pass aggregation algorithm for combining results from individual nodes and drawing useful insights. A graph-based algorithm is used for calculating spread (reach) and popularity (recall) for further improving the output of the aggregation algorithm. The results show that the two-pass aggregation step is sufficient in the majority of cases, and a hybrid of email content and graph-based approach works well in a distributed setup.", "title": "" }, { "docid": "0122b2fa61a4b29bd9a89a7e2c738e94", "text": "CONCEPTUALIZATION: Is learning style a fixed trait or dynamic state? ELT clearly defines learning style as a dynamic state arising from an individual's preferential resolution of the dual dialectics of experiencing/conceptualizing and acting/reflecting. The stability and endurance of these states in individuals comes not solely from fixed genetic qualities or characteristics of human beings: nor, for that matter, does it come from the stable fixed demands of environmental circumstances. Rather, stable and enduring patterns of human individuality arise from consistent patterns of transaction between the individual and his or her environment . . . The way we process the possibilities of each new emerging event determines the range of choices and decisions we see. The choices and decisions we make to some extent determine the events we live through, and these events influence our future choices. Thus, people create themselves through the choice of actual occasions they live through (Kolb 1984: 63-64). Nonetheless, in practice and research there is a marked tendency to treat learning style as a fixed personality trait (e.g., Garner, 2000). 
Individuals often refer to themselves and others as though learning style were a fixed characteristic: \"I have trouble making decisions because I am a diverger.\" \"He likes to work alone because he is an assimilator.\" To emphasize the dynamic nature of learning style, the latest version of the LSI has changed the style names from diverger to diverging, and so on.", "title": "" }, { "docid": "524ededc784d47f70844c9c57878e698", "text": "The business world is undergoing a revolution driven by the use of data and analytics to guide decision-making. While many forces are at work, a major reason for the business analytics revolution is the rapid proliferation of the amount of data available to be analysed. Recently, big data has begun to have a major impact on air travel, with more data being created both through plane sensors and by the passengers on board; the opportunities to use this data will only increase. It provides innovative companies with the opportunity to improve major aspects of their business, from using data to improve customer retention through to making planes safer and more reliable. In this paper we discuss the big data concept and its definitions, and further present some cases for the aviation industry to analyse data from every conceivable channel, for instance, customer data to create a unique profile for each customer based on a wide range of demographic data, behaviours, and preferences.", "title": "" }, { "docid": "7968e0f2960a7dce6017699fd1222e36", "text": "This work investigates the role of contrasting discourse relations signaled by cue phrases, together with phrase positional information, in predicting sentiment at the phrase level. Two domains of online reviews were chosen. The first domain is of nutritional supplement reviews, which are often poorly structured yet also allow certain simplifying assumptions to be made. The second domain is of hotel reviews, which have somewhat different characteristics. A corpus is built from these reviews, and manually tagged for polarity. We propose and evaluate a few new features that are realized through a lightweight method of discourse analysis, and use these features in a hybrid lexicon and machine learning based classifier. Our results show that these features may be used to obtain an improvement in classification accuracy compared to other traditional machine learning approaches.", "title": "" }, { "docid": "1198cdc5009a2471587bd6ec4e53625a", "text": "We introduce a new form of game search called partition search that incorporates dependency analysis, allowing substantial reductions in the portion of the tree that needs to be expanded. Both theoretical results and experimental data are presented. For the game of bridge, partition search provides approximately as much of an improvement over existing methods as alpha-beta pruning provides over minimax.", "title": "" }, { "docid": "95ce29669a80325d7ae664c9ac413c6b", "text": "Most word embedding methods are proposed for general purposes and take a word as the basic unit, learning embeddings according to words' external contexts. However, in biomedical text mining, there are many biomedical entities and syntactic chunks which contain rich domain information, and the semantic meaning of a word is also strongly related to that information. Hence, we present a biomedical domain-specific word embedding model by incorporating stem, chunk and entity to train word embeddings. 
We also present two deep learning architectures respectively for two biomedical text mining tasks, by which we evaluate our word embeddings and compare them with other models. Experimental results show that our biomedical domain-specific word embeddings overall outperform other general-purpose word embeddings in these deep learning methods for biomedical text mining tasks.", "title": "" }, { "docid": "460b8f82e5c378c7d866d92339e14572", "text": "When the number of projections does not satisfy the Shannon/Nyquist sampling requirement, streaking artifacts are inevitable in x-ray computed tomography (CT) images reconstructed using filtered backprojection algorithms. In this letter, the spatial-temporal correlations in dynamic CT imaging have been exploited to sparsify dynamic CT image sequences and the newly proposed compressed sensing (CS) reconstruction method is applied to reconstruct the target image sequences. A prior image reconstructed from the union of interleaved dynamical data sets is utilized to constrain the CS image reconstruction for the individual time frames. This method is referred to as prior image constrained compressed sensing (PICCS). In vivo experimental animal studies were conducted to validate the PICCS algorithm, and the results indicate that PICCS enables accurate reconstruction of dynamic CT images using about 20 view angles, which corresponds to an under-sampling factor of 32. This undersampling factor implies a potential radiation dose reduction by a factor of 32 in myocardial CT perfusion imaging.", "title": "" }, { "docid": "cdee55e977d5809b87f3e8be98acaaa3", "text": "Proximity effects caused by uneven distribution of current among the insulated wire strands of stator multi-strand windings can contribute significant bundle-level proximity losses in permanent magnet (PM) machines operating at high speeds. Three-dimensional finite element analysis is used to investigate the effects of transposition of the insulated strands in stator winding bundles on the copper losses in high-speed machines. The investigation confirms that the bundle proximity losses must be considered in the design of stator windings for high-speed machines, and the amplitude of these losses decreases monotonically as the level of transposition is increased from untransposed to fully-transposed (360°) wire bundles. Analytical models are introduced to estimate the currents in strands in a slot for a high-speed machine.", "title": "" }, { "docid": "7786f9d10349db69a9cc0dada7a17252", "text": "Unsupervised word embeddings have been shown to be valuable as features in supervised learning problems; however, their role in unsupervised problems has been less thoroughly explored. In this paper, we show that embeddings can likewise add value to the problem of unsupervised POS induction. In two representative models of POS induction, we replace multinomial distributions over the vocabulary with multivariate Gaussian distributions over word embeddings and observe consistent improvements in eight languages. We also analyze the effect of various choices while inducing word embeddings on “downstream” POS induction results.", "title": "" }, { "docid": "6036b43cc4e8cd560645355a544eca80", "text": "Recent attempts to achieve fairness in predictive models focus on the balance between fairness and accuracy. In sensitive applications such as healthcare or criminal justice, this trade-off is often undesirable as any increase in prediction error could have devastating consequences. 
In this work, we argue that the fairness of predictions should be evaluated in the context of the data, and that unfairness induced by inadequate sample sizes or unmeasured predictive variables should be addressed through data collection, rather than by constraining the model. We decompose cost-based metrics of discrimination into bias, variance, and noise, and propose actions aimed at estimating and reducing each term. Finally, we perform case studies on prediction of income, mortality, and review ratings, confirming the value of this analysis. We find that data collection is often a means to reduce discrimination without sacrificing accuracy.", "title": "" }, { "docid": "a2e597c8e4ff156eaa72a4981b81df8d", "text": "OBJECTIVE\nAggregation and deposition of amyloid beta (Abeta) in the brain is thought to be central to the pathogenesis of Alzheimer's disease (AD). Recent studies suggest that cerebrospinal fluid (CSF) Abeta levels are strongly correlated with AD status and progression, and may be a meaningful endophenotype for AD. Mutations in presenilin 1 (PSEN1) are known to cause AD and change Abeta levels. In this study, we have investigated DNA sequence variation in the presenilin (PSEN1) gene using CSF Abeta levels as an endophenotype for AD.\n\n\nMETHODS\nWe sequenced the exons and flanking intronic regions of PSEN1 in clinically characterized research subjects with extreme values of CSF Abeta levels.\n\n\nRESULTS\nThis novel approach led directly to the identification of a disease-causing mutation in a family with late-onset AD.\n\n\nINTERPRETATION\nThis finding suggests that CSF Abeta may be a useful endophenotype for genetic studies of AD. Our results also suggest that PSEN1 mutations can cause AD with a large range in age of onset, spanning both early- and late-onset AD.", "title": "" }, { "docid": "9736331d674470adbe534503ef452cca", "text": "In this paper we present our system for human-in-the-loop video object segmentation. The backbone of our system is a method for one-shot video object segmentation [3]. While fast, this method requires an accurate pixel-level segmentation of one (or several) frames as input. As manually annotating such a segmentation is impractical, we propose a deep interactive image segmentation method that can accurately segment objects with only a handful of clicks. On the GrabCut dataset, our method obtains 90% IOU with just 3.8 clicks on average, setting the new state of the art. Furthermore, as our method iteratively refines an initial segmentation, it can effectively correct frames where the video object segmentation fails, thus allowing users to quickly obtain high quality results even on challenging sequences. Finally, we investigate usage patterns and give insights into how many steps users take to annotate frames, what kind of corrections they provide, etc., thus giving important insights for further improving interactive video segmentation.", "title": "" } ]
scidocsrr
358d9a6b55efcf3ec7ee5a024264f2bd
Detecting Silicone Mask-Based Presentation Attack via Deep Dictionary Learning
[ { "docid": "c1f6052ecf802f1b4b2e9fd515d7ea15", "text": "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.", "title": "" }, { "docid": "fe33ff51ca55bf745bdcdf8ee02e2d36", "text": "A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for motion analysis onsite to verify \"liveness\" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features (\"quangles\") being designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is reduced processing time (i.e., that the training of effective cascaded classifiers is feasible in a very short time, less than 1 h for data sets of order 10^4). Scale invariance is implemented through the use of an image scale pyramid. We propose \"liveness\" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on \"liveness\" verification barriers.", "title": "" } ]
[ { "docid": "7b3dd8bdc75bf99f358ef58b2d56e570", "text": "This paper studies asset allocation decisions in the presence of regime switching in asset returns. We find evidence that four separate regimes characterized as crash, slow growth, bull and recovery states are required to capture the joint distribution of stock and bond returns. Optimal asset allocations vary considerably across these states and change over time as investors revise their estimates of the state probabilities. In the crash state, buy-and-hold investors allocate more of their portfolio to stocks the longer their investment horizon, while the optimal allocation to stocks declines as a function of the investment horizon in bull markets. The joint effects of learning about state probabilities and predictability of asset returns from the dividend yield give rise to a non-monotonic relationship between the investment horizon and the demand for stocks. Welfare costs from ignoring regime switching can be substantial even after accounting for parameter uncertainty. Out-of-sample forecasting experiments confirm the economic importance of accounting for the presence of regimes in asset returns.", "title": "" }, { "docid": "587253c0196c15c918178b42e25f3180", "text": "Deep Learning methods are currently the state-of-the-art in many Computer Vision and Image Processing problems, in particular image classification. After years of intensive investigation, a few models matured and became important tools, including Convolutional Neural Networks (CNNs), Siamese and Triplet Networks, Auto-Encoders (AEs) and Generative Adversarial Networks (GANs). The field is fast-paced and there is a lot of terminologies to catch up for those who want to adventure in Deep Learning waters. This paper has the objective to introduce the most fundamental concepts of Deep Learning for Computer Vision in particular CNNs, AEs and GANs, including architectures, inner workings and optimization. We offer an updated description of the theoretical and practical knowledge of working with those models. After that, we describe Siamese and Triplet Networks, not often covered in tutorial papers, as well as review the literature on recent and exciting topics such as visual stylization, pixel-wise prediction and video processing. Finally, we discuss the limitations of Deep Learning for Computer Vision.", "title": "" }, { "docid": "9b37cc1d96d9a24e500c572fa2cb339a", "text": "Site-based or topic-specific search engines work with mixed success because of the general difficulty of the information retrieval task, and the lack of good link information to allow authorities to be identified. We are advocating an open source approach to the problem due to its scope and need for software components. We have adopted a topic-based search engine because it represents the next generation of capability. This paper outlines our scalable system for site-based or topic-specific search, and demonstrates the developing system on a small 250,000 document collection of EU and UN web pages.", "title": "" }, { "docid": "84c9dfb5643e954120b22c7d7aca6e28", "text": "Zika virus (ZIKV), a previously obscure flavivirus closely related to dengue, West Nile, Japanese encephalitis and yellow fever viruses, has emerged explosively since 2007 to cause a series of epidemics in Micronesia, the South Pacific, and most recently the Americas. 
After its putative evolution in sub-Saharan Africa, ZIKV spread in the distant past to Asia and has probably emerged on multiple occasions into urban transmission cycles involving Aedes (Stegomyia) spp. mosquitoes and human amplification hosts, accompanied by a relatively mild dengue-like illness. The unprecedented numbers of people infected during recent outbreaks in the South Pacific and the Americas may have resulted in enough ZIKV infections to notice relatively rare congenital microcephaly and Guillain-Barré syndromes. Another hypothesis is that phenotypic changes in Asian lineage ZIKV strains led to these disease outcomes. Here, we review potential strategies to control the ongoing outbreak through vector-centric approaches as well as the prospects for the development of vaccines and therapeutics.", "title": "" }, { "docid": "c004860f6a093ca3f712471c4f6264b9", "text": "This article presents a tutorial overview of models for estimating the quality experienced by users of speech transmission and communication services. Such models can be classified as either parametric or signal-based. Signal-based models use input speech signals measured at the electrical or acoustic interfaces of the transmission channel. Parametric models, on the other hand, depend on signal and system parameters estimated during network planning or at run time. This tutorial describes the underlying principles as well as advantages and limitations of existing models. It also presents new developments, thus serving as a guide to an appropriate usage of the multitude of current and emerging speech quality models.", "title": "" }, { "docid": "8de530a30b8352e36b72f3436f47ffb2", "text": "This paper presents a Bayesian optimization method with exponential convergence without the need for auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [1] requires access to the δ-cover sampling, which was considered to be impractical [1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.", "title": "" }, { "docid": "62ae3e882e43d279e384b9be8bd9a839", "text": "We propose a generic point cloud encoder that compresses geometry data including positions and normals of point samples corresponding to 3D objects with arbitrary topology. In this work, the coding process is led by an iterative octree cell subdivision of the object space. At each level of subdivision, positions of point samples are approximated by the geometry centers of all tree-front cells while normals are approximated by their statistical average within each of the tree-front cells. With this framework, we employ attribute-dependent encoding techniques to exploit different characteristics of various attributes. As a result, significant improvement in the rate-distortion (R-D) performance has been obtained with respect to the prior art. Furthermore, the proposed point cloud encoder can be potentially used for lossless geometry coding of 3D point clouds, given sufficient levels of octree expansion and normal space partitioning.", "title": "" }, { "docid": "c6aaacf5207f561f70b7ec6c738bb5f0", "text": "Skeletal bone age assessment is a common clinical practice to diagnose endocrine and metabolic disorders in child development. 
In this paper, we describe a fully automated deep learning approach to the problem of bone age assessment using data from the 2017 Pediatric Bone Age Challenge organized by the Radiological Society of North America. The dataset for this competition consists of 12,600 radiological images. Each radiograph in this dataset is an image of a left hand labeled with bone age and sex of a patient. Our approach utilizes several deep neural network architectures trained end-to-end. We use images of whole hands as well as specific parts of a hand for both training and prediction. This approach allows us to measure the importance of specific hand bones for automated bone age analysis. We further evaluate the performance of the suggested method in the context of skeletal development stages. Our approach outperforms other common methods for bone age assessment.", "title": "" }, { "docid": "53fcf4f5285b7a93d99d2c222dfe21dd", "text": "OBJECTIVES\nTo determine whether the use of a near-infrared light venipuncture aid (VeinViewer; Luminetx Corporation, Memphis, Tenn) would improve the rate of successful first-attempt placement of intravenous (IV) catheters in a high-volume pediatric emergency department (ED).\n\n\nMETHODS\nPatients younger than 20 years with standard clinical indications for IV access were randomized to have IV placement by ED nurses (in 3 groups stratified by 5-year blocks of nursing experience) using traditional methods (standard group) or with the aid of the near-infrared light source (device group). If a vein could not be cannulated after 3 attempts, patients crossed over from one study arm to the other, and study nurses attempted placement with the alternative technique. The primary end point was first-attempt success rate for IV catheter placement. After completion of patient enrollment, a questionnaire was completed by study nurses as a qualitative assessment of the device.\n\n\nRESULTS\nA total of 123 patients (median age, 3 years) were included in the study: 62 in the standard group and 61 in the device group. There was no significant difference in first-attempt success rate between the standard (79.0%, 95% confidence interval [CI], 66.8%-88.3%) and device (72.1%, 95% CI, 59.2%-82.9%) groups. Of the 19 study nurses, 14 completed the questionnaire of whom 70% expressed neutral or unfavorable assessments of the device in nondehydrated patients without chronic underlying medical conditions and 90% found the device a helpful tool for patients in whom IV access was difficult.\n\n\nCONCLUSIONS\nFirst-attempt success rate for IV placement was nonsignificantly higher without than with the assistance of a near-infrared light device in a high-volume pediatric ED. Nurses placing IVs did report several benefits to use of the device with specific patient groups, and future research should be conducted to demonstrate the role of the device in these patients.", "title": "" }, { "docid": "cb266f07461a58493d35f75949c4605e", "text": "Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. 
While previous approaches have tried to model the relationship between the class attribute space and the image space via some kind of transfer function in order to model the image space corresponding to an unseen class, we take a different approach and try to generate the samples from the given attributes, using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. By extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at the test time along with the novel classes.", "title": "" }, { "docid": "b51fcfa32dbcdcbcc49f1635b44601ed", "text": "An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular \"funnel-graph.\" The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, as a formal procedure to complement the funnel-graph.", "title": "" }, { "docid": "f5a7a7b2848bb2d8cd230650c19f74f4", "text": "A CMOS image sensor capable of imaging and energy harvesting on the same focal plane is presented for retinal prosthesis. The energy harvesting and imaging (EHI) active pixel sensor (APS) imager was designed, fabricated, and tested in a standard 0.5 μm CMOS process. It has a 54 × 50 array of 21 × 21 μm² EHI pixels, a 10-bit supply boosted (SB) SAR ADC, and charge pump circuits, consuming only 14.25 μW from 1.2 V and running at 7.4 frames per second. The supply boosting technique (SBT) is used in the analog signal chain of the EHI imager. Harvested solar energy on the focal plane is stored on an off-chip capacitor with the help of a charge pump circuit with better than 70% efficiency. Energy harvesting efficiency of the EHI pixel was measured at different light levels; it was 9.4% while producing a 0.41 V open-circuit voltage. The EHI imager delivered 3.35 μW of power to a resistive load at maximum power point operation. The measured pixel array figure of merit (FoM) was 1.32 pW/frame/pixel while the imager figure of merit (iFoM) including whole chip power consumption was 696 fJ/pixel/code for the EHI imager.", "title": "" }, { "docid": "6d65238e93aa1a9a0e5e522af8ecb2e0", "text": "We introduce end-to-end neural network based models for simulating users of task-oriented dialogue systems. User simulation in dialogue systems is crucial from two different perspectives: (i) automatic evaluation of different dialogue models, and (ii) training task-oriented dialogue systems. 
We design a hierarchical sequence-to-sequence model that first encodes the initial user goal and system turns into fixed length representations using Recurrent Neural Networks (RNN). It then encodes the dialogue history using another RNN layer. At each turn, user responses are decoded from the hidden representations of the dialogue level RNN. This hierarchical user simulator (HUS) approach allows the model to capture undiscovered parts of the user goal without the need of an explicit dialogue state tracking. We further develop several variants by utilizing a latent variable model to inject random variations into user responses to promote diversity in simulated user responses and a novel goal regularization mechanism to penalize divergence of user responses from the initial user goal. We evaluate the proposed models on movie ticket booking domain by systematically interacting each user simulator with various dialogue system policies trained with different objectives and users.", "title": "" }, { "docid": "ad57044935e65f144a5d718844672b2c", "text": "DeLone and McLean’s (1992) model of information systems success has received much attention amongst researchers. This study provides the first empirical test of the entire DeLone and McLean model in the user developed application domain. Overall, the model was not supported by the data. Of the nine hypothesised relationships tested four were found to be significant and the remainder not significant. The model provided strong support for the relationships between perceived system quality and user satisfaction, perceived information quality and user satisfaction, user satisfaction and intended use, and user satisfaction and perceived individual impact.", "title": "" }, { "docid": "d2daabf75e8882f0feefdbb602e50fd8", "text": "In order to improve the security performance of multiuser visible light communication (VLC) and facilitate the secure application of optical wireless communication technology in Internet-of-Things, we investigate the physical-layer security in a multiuser VLC system with non-orthogonal multiple access (NOMA). When the light-emitting diode (LED) transmitter communicates with multiple legitimate users by downlink NOMA, both single eavesdropper and multi-eavesdropper scenarios are considered. In the presence of single eavesdropper, based on transmission characteristics of the optical wireless channel, with known instantaneous channel state information (CSI) of the NOMA legitimate channels and statistical CSI of the eavesdropper channel, an exact expression of secrecy outage probability (SOP) is derived, which acts as a benchmark of the security performance to guide selecting or optimizing parameters of the LED transmitter and the photodiode receiver of NOMA legitimate users. In the multi-eavesdropper case, based on the spatial distribution of legitimate users and eavesdroppers, the SOP is obtained via a stochastic geometry theory, so as to guide the NOMA legitimate users to keep away from the area with high eavesdropper density. For typical parameters of the indoor LED transmitter and the PD receiver, simulation results show that the SOP performance improves with the increasing of LED transmission power or transmission signal-to-noise ratio (SNR) in both scenarios. 
Specifically, in the single eavesdropper case, enlarging the channel condition difference of user groups or deviating the eavesdropper from the given user group can improve the SOP performance, and for a given NOMA legitimate user, the SOP eventually settles around 0.2 while the semi-angle at half illuminance of the LED varies between 15° to 60°. In the multi-eavesdropper case, we can get a better SOP performance when reducing the eavesdropper density or the semi-angle at half illuminance of the LED for a given eavesdropper density.", "title": "" }, { "docid": "28f220e88b9b2947c8203d83210f77d0", "text": "Designers frequently draw curvature lines to convey bending of smooth surfaces in concept sketches. We present a method to extrapolate curvature lines in a rough concept sketch, recovering the intended 3D curvature field and surface normal at each pixel of the sketch. This 3D information allows to enrich the sketch with 3D-looking shading and texturing.\n We first introduce the concept of regularized curvature lines that model the lines designers draw over curved surfaces, encompassing curvature lines and their extension as geodesics over flat or umbilical regions. We build on this concept to define the orthogonal cross field that assigns two regularized curvature lines to each point of a 3D surface. Our algorithm first estimates the projection of this cross field in the drawing, which is nonorthogonal due to foreshortening. We formulate this estimation as a scattered interpolation of the strokes drawn in the sketch, which makes our method robust to sketchy lines that are typical for design sketches. Our interpolation relies on a novel smoothness energy that we derive from our definition of regularized curvature lines. Optimizing this energy subject to the stroke constraints produces a dense nonorthogonal 2D cross field which we then lift to 3D by imposing orthogonality. Thus, one central concept of our approach is the generalization of existing cross field algorithms to the nonorthogonal case.\n We demonstrate our algorithm on a variety of concept sketches with various levels of sketchiness. We also compare our approach with existing work that takes clean vector drawings as input.", "title": "" }, { "docid": "f249a6089a789e52eeadc8ae16213bc1", "text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparable to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile. 
This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.", "title": "" }, { "docid": "db4b6a75db968868630720f7955d9211", "text": "Bots have been playing a crucial role in online platform ecosystems, as efficient and automatic tools to generate content and diffuse information to the social media human population. In this chapter, we will discuss the role of social bots in content spreading dynamics in social media. In particular, we will first investigate some differences between diffusion dynamics of content generated by bots, as opposed to humans, in the context of political communication, then study the characteristics of bots behind the diffusion dynamics of social media spam campaigns.", "title": "" }, { "docid": "3d04155f68912f84b02788f93e9da74c", "text": "Data partitioning significantly improves the query performance in distributed database systems. A large number of techniques have been proposed to efficiently partition a dataset for a given query workload. However, many modern analytic applications involve ad-hoc or exploratory analysis where users do not have a representative query workload upfront. Furthermore, workloads change over time as businesses evolve or as analysts gain better understanding of their data. Static workload-based data partitioning techniques are therefore not suitable for such settings. In this paper, we describe the demonstration of Amoeba, a distributed storage system which uses adaptive multi-attribute data partitioning to efficiently support ad-hoc as well as recurring queries. Amoeba applies a robust partitioning algorithm such that ad-hoc queries on all attributes have similar performance gains. Thereafter, Amoeba adaptively repartitions the data based on the observed query sequence, i.e., the system improves over time. All along Amoeba offers both adaptivity (i.e., adjustments according to workload changes) as well as robustness (i.e., avoiding performance spikes due to workload changes). We propose to demonstrate Amoeba on scenarios from an internet-ofthings startup that tracks user driving patterns. We invite the audience to interactively fire fast ad-hoc queries, observe multi-dimensional adaptivity, and play with a robust/reactive knob in Amoeba. The web front end displays the layout changes, runtime costs, and compares it to Spark with both default and workload-aware partitioning.", "title": "" }, { "docid": "22577959a42a8a4e8e1af7f88542c6ff", "text": "Cooperative problem-solving systems are computer-based systems that augment a person's ability to create, reflect, design, decide, and reason. Our work focuses on supporting cooperative problem solving in the context of high-functionality computer systems. We show how the conceptual framework behind a given system determines crucial aspects of the system's behavior. Several systems are described that attempted to address specific shortcomings of prevailing assumptions, resulting in a new conceptual framework. To further test this resulting framework, we conducted an empirical study of a success model of cooperative problem solving between people in a large hardware store. The conceptual framework is instantiated in a number of new system-building efforts, which are described and discussed.", "title": "" } ]
scidocsrr
b1931d8d6117af3c412d44a0d231b719
A review of data mining applications in crime
[ { "docid": "4f31b16c53632e2d1ae874a692e5b64e", "text": "Previously published algorithms for finding the longest common subsequence of two sequences of length n have had a best-case running time of O(n^2). An algorithm for this problem is presented which has a running time of O((r + n) log n), where r is the total number of ordered pairs of positions at which the two sequences match. Thus in the worst case the algorithm has a running time of O(n^2 log n). However, for those applications where most positions of one sequence match relatively few positions in the other sequence, a running time of O(n log n) can be expected.", "title": "" }, { "docid": "9d9f2129d20267a2e2e8c18976750eab", "text": "Blogs, often treated as the equivalent of online personal diaries, have become one of the fastest growing types of Web-based media. Everyone is free to express their opinions and emotions very easily through blogs. In the blogosphere, many communities have emerged, which include hate groups and racists that are trying to share their ideology, express their views, or recruit new group members. It is important to analyze these virtual communities, defined based on membership and subscription linkages, in order to monitor for activities that are potentially harmful to society. While many Web mining and network analysis techniques have been used to analyze the content and structure of the Web sites of hate groups on the Internet, these techniques have not been applied to the study of hate groups in blogs. To address this issue, we have proposed a semi-automated approach in this research. The proposed approach consists of four modules, namely blog spider, information extraction, network analysis, and visualization. We applied this approach to identify and analyze a selected set of 28 anti-Black hate groups (820 bloggers) on Xanga, one of the most popular blog hosting sites. Our analysis results revealed some interesting demographical and topological characteristics in these groups, and identified at least two large communities on top of the smaller ones. The study also demonstrated the feasibility of applying the proposed approach in the study of hate groups and other related communities in blogs. © 2006 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "c99b28aefb425a076dac01b2a0861087", "text": "Theoretical, psychoanalytical constructs referring to the unconscious, the superego, and the id enjoy an autonomy within the I. As such, this study contemplates the discussion of these foreign entities that inhabit the interior of the I, producing an effect of foreignness. In the first section, I will develop a reflection on the state of foreignness of the unconscious. I will begin with an analogy used by Freud, which addresses the thesis of universality of consciousness with the psychoanalytical thesis of the subconscience within the I. Affirmation of consciousness in the other may be used analogously to affirm the idea of another inhabiting our own being. I shall continue, seeking to understand how the process of unconscious repression produces the effect of foreignness. The idea of a moral censor present in the entity of the superego constitutes the theme of the second section. The superego follows the principle of otherness in its constitution and in its effects on the I. Finally, a reflection on the dimension of otherness in the Id seems urgent to me, as with this concept, Freud radicalized the idea of the foreign as the origin of the subject.", "title": "" }, { "docid": "39c1b53047e4314073312741a39c7e5c", "text": "We propose a novel superpixel-based multi-view convolutional neural network for semantic image segmentation. The proposed network produces a high-quality segmentation of a single image by leveraging information from additional views of the same scene. Particularly in indoor videos such as those captured by robotic platforms or handheld and bodyworn RGBD cameras, nearby video frames provide diverse viewpoints and additional context of objects and scenes. To leverage such information, we first compute region correspondences by optical flow and image boundary-based superpixels. Given these region correspondences, we propose a novel spatio-temporal pooling layer to aggregate information over space and time. We evaluate our approach on the NYU-Depth-V2 and the SUN3D datasets and compare it to various state-of-the-art single-view and multi-view approaches. Besides a general improvement over the state-of-the-art, we also show the benefits of making use of unlabeled frames during training for multi-view as well as single-view prediction.", "title": "" }, { "docid": "59c4e899795b82d103433c1bf6e12243", "text": "A few troublemakers often spoil online environments for everyone else. An extremely disruptive type of abuser is the troll, whose malicious activities are relatively non-obvious, and thus difficult to detect and contain -- particularly by automated systems. A growing corpus of qualitative research focuses on trolling, and differentiates it from other forms of abuse; however, its findings are not directly actionable into automated systems. On the other hand, quantitative research uses definitions of \"troll\" that mostly fail to capture what moderators and users consider trolling. We address this gap by giving a quantitative analysis of posts, conversations, and users specifically sanctioned for trolling in an online forum. Although trolls (unlike most other abusers) hardly stand out in a conversation, e.g. in terms of vocabulary, how they interact, rather than what they contribute, provides cues of their malicious intent.", "title": "" }, { "docid": "de4c44363fd6bb6da7ec0c9efd752213", "text": "Modeling the structure of coherent texts is a task of great importance in NLP. 
The task of organizing a given set of sentences into a coherent order has been commonly used to build and evaluate models that understand such structure. In this work we propose an end-to-end neural approach based on the recently proposed set to sequence mapping framework to address the sentence ordering problem. Our model achieves state-of-the-art performance in the order discrimination task on two datasets widely used in the literature. We also consider a new interesting task of ordering abstracts from conference papers and research proposals and demonstrate strong performance against recent methods. Visualizing the sentence representations learned by the model shows that the model has captured high level logical structure in these paragraphs. The model also learns rich semantic sentence representations by learning to order texts, performing comparably to recent unsupervised representation learning methods in the sentence similarity and paraphrase detection tasks.", "title": "" }, { "docid": "39debcb0aa41eec73ff63a4e774f36fd", "text": "Automatically segmenting unstructured text strings into structured records is necessary for importing the information contained in legacy sources and text collections into a data warehouse for subsequent querying, analysis, mining and integration. In this paper, we mine tables present in data warehouses and relational databases to develop an automatic segmentation system. Thus, we overcome limitations of existing supervised text segmentation approaches, which require comprehensive manually labeled training data. Our segmentation system is robust, accurate, and efficient, and requires no additional manual effort. Thorough evaluation on real datasets demonstrates the robustness and accuracy of our system, with segmentation accuracy exceeding state of the art supervised approaches.", "title": "" }, { "docid": "458825b414f51a164a244ce712c85e2c", "text": "Animal communication is a dynamic field that promotes cross-disciplinary study of the complex mechanisms of sending and receiving signals, the neurobiology of signal detection and processing, and the behaviors of animals creating and responding to encoded messages. Alongside visual signals, songs, or pheromones exists another major communication channel that has been rather neglected until recent decades: substrate-borne vibration. Vibrations carried in the substrate are considered to provide a very old and apparently ubiquitous communication channel that is used alone or in combination with other information channels in multimodal signaling. The substrate could be ‘the ground’, or a plant leaf or stem, or the surface of water, or a spider’s web, or a honeybee’s honeycomb. Animals moving on these substrates typically create incidental vibrations that can alert others to their presence. They also may use behaviors to create vibrational waves that are employed in the contexts of mate location and identification, courtship and mating, maternal care and sibling interactions, predation, predator avoidance, foraging, and general recruitment of family members to work. In fact, animals use substrate-borne vibrations to signal in the same contexts that they use vision, hearing, touch, taste, or smell. Study of vibrational communication across animal taxa provides more than just a more complete story. Communication through substrate-borne vibration has its own constraints and opportunities not found in other signaling modalities. 
Here, I review the state of our understanding of information acquisition via substrate-borne vibrations with special attention to the most recent literature.", "title": "" }, { "docid": "27d7f7935c235a3631fba6e3df08f623", "text": "We investigate the task of Named Entity Recognition (NER) in the domain of biomedical text. There is little published work employing modern neural network techniques in this domain, probably due to the small sizes of human-labeled data sets, as non-trivial neural models would have great difficulty avoiding overfitting. In this work we follow a semi-supervised learning approach: We first train state-of-the art (deep) neural networks on a large corpus of noisy machine-labeled data, then “transfer” and fine-tune the learned model on two higher-quality humanlabeled data sets. This approach yields higher performance than the current best published systems for the class DISEASE. It trails but is not far from the currently best systems for the class CHEM.", "title": "" }, { "docid": "ce1b4c5e15fd1d0777c26ca93a9cadbd", "text": "In early studies on energy metabolism of tumor cells, it was proposed that the enhanced glycolysis was induced by a decreased oxidative phosphorylation. Since then it has been indiscriminately applied to all types of tumor cells that the ATP supply is mainly or only provided by glycolysis, without an appropriate experimental evaluation. In this review, the different genetic and biochemical mechanisms by which tumor cells achieve an enhanced glycolytic flux are analyzed. Furthermore, the proposed mechanisms that arguably lead to a decreased oxidative phosphorylation in tumor cells are discussed. As the O(2) concentration in hypoxic regions of tumors seems not to be limiting for the functioning of oxidative phosphorylation, this pathway is re-evaluated regarding oxidizable substrate utilization and its contribution to ATP supply versus glycolysis. In the tumor cell lines where the oxidative metabolism prevails over the glycolytic metabolism for ATP supply, the flux control distribution of both pathways is described. The effect of glycolytic and mitochondrial drugs on tumor energy metabolism and cellular proliferation is described and discussed. Similarly, the energy metabolic changes associated with inherent and acquired resistance to radiotherapy and chemotherapy of tumor cells, and those determined by positron emission tomography, are revised. It is proposed that energy metabolism may be an alternative therapeutic target for both hypoxic (glycolytic) and oxidative tumors.", "title": "" }, { "docid": "22ad829acba8d8a0909f2b8e31c1f0c3", "text": "Covariance matrices capture correlations that are invaluable in modeling real-life datasets. Using all d elements of the covariance (in d dimensions) is costly and could result in over-fitting; and the simple diagonal approximation can be over-restrictive. In this work, we present a new model, the Low-Rank Gaussian Mixture Model (LRGMM), for modeling data which can be extended to identifying partitions or overlapping clusters. The curse of dimensionality that arises in calculating the covariance matrices of the GMM is countered by using low-rank perturbed diagonal matrices. The efficiency is comparable to the diagonal approximation, yet one can capture correlations among the dimensions. 
Our experiments reveal the LRGMM to be an efficient and highly applicable tool for working with large high-dimensional datasets.", "title": "" }, { "docid": "d03e2aa41bd345bcb559d6e9c1cef393", "text": "This paper develops a coarse-to-fine framework for single-image super-resolution (SR) reconstruction. The coarse-to-fine approach achieves high-quality SR recovery based on the complementary properties of both example learning- and reconstruction-based algorithms: example learning-based SR approaches are useful for generating plausible details from external exemplars but poor at suppressing aliasing artifacts, while reconstruction-based SR methods are propitious for preserving sharp edges yet fail to generate fine details. In the coarse stage of the method, we use a set of simple yet effective mapping functions, learned via correlative neighbor regression of grouped low-resolution (LR) to high-resolution (HR) dictionary atoms, to synthesize an initial SR estimate with particularly low computational cost. In the fine stage, we devise an effective regularization term that seamlessly integrates the properties of local structural regularity, nonlocal self-similarity, and collaborative representation over relevant atoms in a learned HR dictionary, to further improve the visual quality of the initial SR estimation obtained in the coarse stage. The experimental results indicate that our method outperforms other state-of-the-art methods for producing high-quality images despite that both the initial SR estimation and the followed enhancement are cheap to implement.", "title": "" }, { "docid": "ccd27b6cc1d5900deb86a72d535a66f5", "text": "We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. In this paper, we study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the Max-flow Min-cut Theorem for network information flow. Contrary to one’s intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a “fluid” which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.", "title": "" }, { "docid": "fcf7f7562fe3e01bba64a61b7f54b04c", "text": "IMPORTANCE\nBoth bullies and victims of bullying are at risk for psychiatric problems in childhood, but it is unclear if this elevated risk extends into early adulthood.\n\n\nOBJECTIVE\nTo test whether bullying and/or being bullied in childhood predicts psychiatric problems and suicidality in young adulthood after accounting for childhood psychiatric problems and family hardships.\n\n\nDESIGN\nProspective, population-based study.\n\n\nSETTING\nCommunity sample from 11 counties in Western North Carolina.\n\n\nPARTICIPANTS\nA total of 1420 participants who had being bullied and bullying assessed 4 to 6 times between the ages of 9 and 16 years. 
Participants were categorized as bullies only, victims only, bullies and victims (hereafter referred to as bullies/victims), or neither.\n\n\nMAIN OUTCOME MEASURE\nPsychiatric outcomes, which included depression, anxiety, antisocial personality disorder, substance use disorders, and suicidality (including recurrent thoughts of death, suicidal ideation, or a suicide attempt), were assessed in young adulthood (19, 21, and 24-26 years) by use of structured diagnostic interviews. RESULTS Victims and bullies/victims had elevated rates of young adult psychiatric disorders, but also elevated rates of childhood psychiatric disorders and family hardships. After controlling for childhood psychiatric problems or family hardships, we found that victims continued to have a higher prevalence of agoraphobia (odds ratio [OR], 4.6 [95% CI, 1.7-12.5]; P < .01), generalized anxiety (OR, 2.7 [95% CI, 1.1-6.3]; P < .001), and panic disorder (OR, 3.1 [95% CI, 1.5-6.5]; P < .01) and that bullies/victims were at increased risk of young adult depression (OR, 4.8 [95% CI, 1.2-19.4]; P < .05), panic disorder (OR, 14.5 [95% CI, 5.7-36.6]; P < .001), agoraphobia (females only; OR, 26.7 [95% CI, 4.3-52.5]; P < .001), and suicidality (males only; OR, 18.5 [95% CI, 6.2-55.1]; P < .001). Bullies were at risk for antisocial personality disorder only (OR, 4.1 [95% CI, 1.1-15.8]; P < .04).\n\n\nCONCLUSIONS AND RELEVANCE\nThe effects of being bullied are direct, pleiotropic, and long-lasting, with the worst effects for those who are both victims and bullies.", "title": "" }, { "docid": "ca58a73d73f4174367cdee6b5269379c", "text": "Data noising is an effective technique for regularizing neural network models. While noising is widely adopted in application domains such as vision and speech, commonly used noising primitives have not been developed for discrete sequence-level settings such as language modeling. In this paper, we derive a connection between input noising in neural network language models and smoothing in n-gram models. Using this connection, we draw upon ideas from smoothing to develop effective noising schemes. We demonstrate performance gains when applying the proposed schemes to language modeling and machine translation. Finally, we provide empirical analysis validating the relationship between noising and smoothing.", "title": "" }, { "docid": "efcf84406a2218deeb4ca33cb8574172", "text": "Cross-site scripting attacks represent one of the major security threats in today’s Web applications. Current approaches to mitigate cross-site scripting vulnerabilities rely on either server-based or client-based defense mechanisms. Although effective for many attacks, server-side protection mechanisms may leave the client vulnerable if the server is not well patched. On the other hand, client-based mechanisms may incur a significant overhead on the client system. In this work, we present a hybrid client-server solution that combines the benefits of both architectures. Our Proxy-based solution leverages the strengths of both anomaly detection and control flow analysis to provide accurate detection. We demonstrate the feasibility and accuracy of our approach through extended testing using real-world cross-site scripting exploits.", "title": "" }, { "docid": "364f9c0272d3b7f32da16e7f66dee2ad", "text": "We develop a general framework for margin-based multicategory classification in metric spaces. The basic work-horse is a margin-regularized version of the nearest-neighbor classifier. 
We prove generalization bounds that match the state of the art in sample size n and significantly improve the dependence on the number of classes k. Our point of departure is a nearly Bayes-optimal finite-sample risk bound independent of k. Although k-free, this bound is unregularized and non-adaptive, which motivates our main result: Rademacher and scale-sensitive margin bounds with a logarithmic dependence on k. As the best previous risk estimates in this setting were of order √ k, our bound is exponentially sharper. From the algorithmic standpoint, in doubling metric spaces our classifier may be trained on n examples in O(n log n) time and evaluated on new points in O(log n) time.", "title": "" }, { "docid": "33e5718ddad39600605530078d3d152e", "text": "This work presents the modeling and control of a tilt-rotor UAV with tail controlled surfaces for path tracking with improved forward flight performance. A nonlinear dynamic model is obtained through Euler-Lagrange formulation and linearized around a reference trajectory in order to obtain a linear parameter-varying model. The forward velocity is treated as an uncertain parameter, and the linearized system is represented as a set of polytopes with nonempty intersection regarding the forward velocity. Feedback gains are computed for each of the vertices of the polytopes using a discrete mixed control approach with pole placement constraints strategy. The resultant feedback gain, which is able to control the system inside a given polytope, is obtained using an adaptive law through an optimal convex combination of the vertices' gains. Finally, an adaptive mixing scheme is used to smoothly schedule the feedback gains between the polytopes.", "title": "" }, { "docid": "e9676faf7e8d03c64fdcf6aa5e09b008", "text": "In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without image-to-vector transformation. While in contrast to 2DPCA, DiaPCA reserves the correlations between variations of rows and those of columns of images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.", "title": "" }, { "docid": "6386c0ef0d7cc5c33e379d9c4c2ca019", "text": "BACKGROUND\nEven after negative sentinel lymph node biopsy (SLNB) for primary melanoma, patients who develop in-transit (IT) melanoma or local recurrences (LR) can have subclinical regional lymph node involvement.\n\n\nSTUDY DESIGN\nA prospective database identified 33 patients with IT melanoma/LR who underwent technetium 99m sulfur colloid lymphoscintigraphy alone (n = 15) or in conjunction with lymphazurin dye (n = 18) administered only if the IT melanoma/LR was concurrently excised.\n\n\nRESULTS\nSeventy-nine percent (26 of 33) of patients undergoing SLNB in this study had earlier removal of lymph nodes in the same lymph node basin as the expected drainage of the IT melanoma or LR at the time of diagnosis of their primary melanoma. Lymphoscintography at time of presentation with IT melanoma/LR was successful in 94% (31 of 33) cases, and at least 1 sentinel lymph node was found intraoperatively in 97% (30 of 31) cases. The SLNB was positive in 33% (10 of 30) of these cases. Completion lymph node dissection was performed in 90% (9 of 10) of patients. 
Nine patients with negative SLNB and IT melanoma underwent regional chemotherapy. Patients in this study with a positive sentinel lymph node at the time the IT/LR was mapped had a considerably shorter time to development of distant metastatic disease compared with those with negative sentinel lymph nodes.\n\n\nCONCLUSIONS\nIn this study, we demonstrate the technical feasibility and clinical use of repeat SLNB for recurrent melanoma. Performing SLNB cannot only optimize local, regional, and systemic treatment strategies for patients with LR or IT melanoma, but also appears to provide important prognostic information.", "title": "" }, { "docid": "e059d7e04c3dba8ed570ad1d72a647b5", "text": "An electronic throttle is a low-power dc servo drive which positions the throttle plate. Its application in modern automotive engines leads to improvements in vehicle drivability, fuel economy, and emissions. Transmission friction and the return spring limp-home nonlinearity significantly affect the electronic throttle performance. The influence of these effects is analyzed by means of computer simulations, experiments, and analytical calculations. A dynamic friction model is developed in order to adequately capture the experimentally observed characteristics of the presliding-displacement and breakaway effects. The linear part of electronic throttle process model is also analyzed and experimentally identified. A nonlinear control strategy is proposed, consisting of a proportional-integral-derivative (PID) controller and a feedback compensator for friction and limp-home effects. The PID controller parameters are analytically optimized according to the damping optimum criterion. The proposed control strategy is verified by computer simulations and experiments.", "title": "" }, { "docid": "cd8bd76ecebbd939400b4724499f7592", "text": "Scene recognition with RGB images has been extensively studied and has reached very remarkable recognition levels, thanks to convolutional neural networks (CNN) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so often leverages RGB large datasets, by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching bottom layers, which is key to learn modality-specific features. In contrast, we focus on the bottom layers, and propose an alternative strategy to learn depth features combining local weakly supervised training from patches followed by global fine tuning with images. This strategy is capable of learning very discriminative depthspecific features with limited depth images, without resorting to Places-CNN. In addition we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them in a common space and further leaning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth only and combined RGB-D data.", "title": "" } ]
scidocsrr
05659004dcc6a36cce64348fce84f14c
Review of automatic sarcasm detection
[ { "docid": "f59d78932d77e81c23e6d0d08b887053", "text": "Automatically detecting verbal irony (roughly, sarcasm) in online content is important for many practical applications (e.g., sentiment detection), but it is difficult. Previous approaches have relied predominantly on signal gleaned from word counts and grammatical cues. But such approaches fail to exploit the context in which comments are embedded. We thus propose a novel strategy for verbal irony classification that exploits contextual features, specifically by combining noun phrases and sentiment extracted from comments with the forum type (e.g., conservative or liberal) to which they were posted. We show that this approach improves verbal irony classification performance. Furthermore, because this method generates a very large feature space (and we expect predictive contextual features to be strong but few), we propose a mixed regularization strategy that places a sparsity-inducing `1 penalty on the contextual feature weights on top of the `2 penalty applied to all model coefficients. This increases model sparsity and reduces the variance of model performance.", "title": "" } ]
[ { "docid": "6bba3dc4f75d403f387f40174d085463", "text": "With the proliferation of wireless devices, wireless networks in various forms have become global information infrastructure and an important part of our daily life, which, at the same time, incur fast escalations of both data volumes and energy demand. In other words, energy-efficient wireless networking is a critical and challenging issue in the big data era. In this paper, we provide a comprehensive survey of recent developments on energy-efficient wireless networking technologies that are effective or promisingly effective in addressing the challenges raised by big data. We categorize existing research into two main parts depending on the roles of big data. The first part focuses on energy-efficient wireless networking techniques in dealing with big data and covers studies in big data acquisition, communication, storage, and computation; while the second part investigates recent approaches based on big data analytics that are promising to enhance energy efficiency of wireless networks. In addition, we identify a number of open issues and discuss future research directions for enhancing energy efficiency of wireless networks in the big data era.", "title": "" }, { "docid": "687caec27d44691a6aac75577b32eb81", "text": "We present unsupervised approaches to the problem of modeling dialog acts in asynchronous conversations; i.e., conversations where participants collaborate with each other at different times. In particular, we investigate a graph-theoretic deterministic framework and two probabilistic conversation models (i.e., HMM and HMM+Mix) for modeling dialog acts in emails and forums. We train and test our conversation models on (a) temporal order and (b) graph-structural order of the datasets. Empirical evaluation suggests (i) the graph-theoretic framework that relies on lexical and structural similarity metrics is not the right model for this task, (ii) conversation models perform better on the graphstructural order than the temporal order of the datasets and (iii) HMM+Mix is a better conversation model than the simple HMM model.", "title": "" }, { "docid": "b3a3dfdc32f9751fabdd6fd06fc598ca", "text": "L-LDA is a new supervised topic model for assigning \"topics\" to a collection of documents (e.g., Twitter profiles). User studies have shown that L-LDA effectively performs a variety of tasks in Twitter that include not only assigning topics to profiles, but also re-ranking feeds, and suggesting new users to follow. Building upon these promising qualitative results, we here run an extensive quantitative evaluation of L-LDA. We test the extent to which, compared to the competitive baseline of Support Vector Machines (SVM), L-LDA is effective at two tasks: 1) assigning the correct topics to profiles; and 2) measuring the similarity of a profile pair. We find that L-LDA generally performs as well as SVM, and it clearly outperforms SVM when training data is limited, making it an ideal classification technique for infrequent topics and for (short) profiles of moderately active users. We have also built a web application that uses L-LDA to classify any given profile and graphically map predominant topics in specific geographic regions.", "title": "" }, { "docid": "6a64d064220681e83751938ce0190151", "text": "Forensic dentistry can be defined in many ways. One of the more elegant definitions is simply that forensic dentistry represents the overlap between the dental and the legal professions. 
This two-part series presents the field of forensic dentistry by outlining two of the major aspects of the profession: human identification and bite marks. This first paper examines the use of the human dentition and surrounding structures to enable the identification of found human remains. Conventional and novel techniques are presented.", "title": "" }, { "docid": "7335ac635d7bac9683eadfbbdd79839b", "text": "Being one of the major operating system in smartphone industry, security in Android is paramount importance to end users. Android applications are published through Google Play Store which is an official marketplace for Android. If we have to define the current security policy implemented by Google Play Store for publishing Android applications in one sentence then we can write it as “all are suspect but innocent until proven guilty.” It means an application does not have to go through rigorous security review to be accepted for publication. It is assumed that all the applications are benign which does not mean it will remain so in future. If any application is found doing suspicious activities then the application will be categorized as malicious and it will be removed from the Play Store. Though filtering of malicious applications is performed at Play Store, some malicious applications escape the filtering process. Thus, it becomes necessary to take strong security measures at other levels. Security in Android can be enforced at system and application levels. At system level Android uses sandboxing technique while at application level it uses permission. In this paper, we analyze the permission-based security implemented in Android through three different perspectives – policy expert, developer, and end user.", "title": "" }, { "docid": "cea20aad38c5ca08bc2a07bde39ba2d0", "text": "The existing snow/rain removal methods often fail for heavy snow/rain and dynamic scene. One reason for the failure is due to the assumption that all the snowflakes/rain streaks are sparse in snow/rain scenes. The other is that the existing methods often can not differentiate moving objects and snowflakes/rain streaks. In this paper, we propose a model based on matrix decomposition for video desnowing and deraining to solve the problems mentioned above. We divide snowflakes/rain streaks into two categories: sparse ones and dense ones. With background fluctuations and optical flow information, the detection of moving objects and sparse snowflakes/rain streaks is formulated as a multi-label Markov Random Fields (MRFs). As for dense snowflakes/rain streaks, they are considered to obey Gaussian distribution. The snowflakes/rain streaks, including sparse ones and dense ones, in scene backgrounds are removed by low-rank representation of the backgrounds. Meanwhile, a group sparsity term in our model is designed to filter snow/rain pixels within the moving objects. Experimental results show that our proposed model performs better than the state-of-the-art methods for snow and rain removal.", "title": "" }, { "docid": "4646770e02f6c71f749e92b3b372ee00", "text": "Cochannel speech separation aims to separate two speech signals from a single mixture. In a supervised scenario, the identities of two speakers are given, and current methods use pre-trained speaker models for separation. One issue in model-based methods is the mismatch between training and test signal levels. We propose an iterative algorithm to adapt speaker models to match the signal levels in testing. 
Our algorithm first obtains initial estimates of source signals using unadapted speaker models and then detects the input signal-to-noise ratio (SNR) of the mixture. The input SNR is then used to adapt the speaker models for more accurate estimation. The two steps iterate until convergence. Compared to search-based SNR detection methods, our method is not limited to given SNR levels. Evaluations demonstrate that the iterative procedure converges quickly in a considerable range of SNRs and improves separation results significantly. Comparisons show that the proposed system performs significantly better than related model-based systems.", "title": "" }, { "docid": "835db6f57216eea24afc0e55935dec88", "text": "Over the past few years, it has been proved that a fast website has a positive influence on the conversion rate of e-commerce websites. However, it is not yet known what other reasons related to performance explain the fact that some users convert and others do not. In this thesis, we analyze the behavior of users in three e-commerce websites, in order to analyze the path of the user in the site, and estimate a series of features that impact the behavior of the user. Two approaches were proposed: Sequence Modeling and Anomaly Detection. The former is an application of Long Short-term Memory (LSTM) in the path of the user as sequential data, aiming to predict whether the user purchases an item on the site. The latter is an alternative use of Autoencoders to detect anomalies, where the buyers are considered anomalies, this approach allows to analyze the influence of other features related to performance in the final decision of the user. We found that the path of the user does not influence the decision of purchasing a product. In contrast, we show that the user becomes more likely to purchase after a certain number of steps on the website. In addition, we are capable of showing the influence of the performance on each e-commerce website for specific page groups. Based on the results a decision-making tool was developed to estimate the possible impact caused by a positive increment of speed.", "title": "" }, { "docid": "88660d823f1c20cf0b75b665c66af696", "text": "A pectus index can be derived from dividing the transverse diameter of the chest by the anterior-posterior diameter on a simple CT scan. In a preliminary report, all patients who required operative correction for pectus excavatum had a pectus index greater than 3.25 while matched normal controls were all less than 3.25. A simple CT scan may be a useful adjunct in objective evaluation of children and teenagers for surgery of pectus excavatum.", "title": "" }, { "docid": "0fb2afcd2997a1647bb4edc12d2191f9", "text": "Many databases have grown to the point where they cannot fit into the fast memory of even large memory machines, to say nothing of current workstations. If what we want to do is to use these data bases to construct predictions of various characteristics, then since the usual methods require that all data be held in fast memory, various work-arounds have to be used. This paper studies one such class of methods which give accuracy comparable to that which could have been obtained if all data could have been held in core and which are computationally fast. The procedure takes small pieces of the data, grows a predictor on each small piece and then pastes these predictors together. A version is given that scales up to terabyte data sets. 
The methods are also applicable to on-line learning.", "title": "" }, { "docid": "ea304e700faa3d3cae4bff89cf01c397", "text": "Ternary logic is a promising alternative to the conventional binary logic in VLSI design as it provides the advantages of reduced interconnects, higher operating speeds, and smaller chip area. This paper presents a pair of circuits for implementing a ternary half adder using carbon nanotube field-effect transistors. The proposed designs combine both futuristic ternary and conventional binary logic design approach. One of the proposed circuits for ternary to binary decoder simplifies further circuit implementation and provides excellent delay and power advantages in data path circuit such as adder. These circuits have been extensively simulated using HSPICE to obtain power, delay, and power delay product. The circuit performances are compared with alternative designs reported in recent literature. One of the proposed ternary adders has been demonstrated power, power delay product improvement up to 63% and 66% respectively, with lesser transistor count. So, the use of these half adders in complex arithmetic circuits will be advantageous.", "title": "" }, { "docid": "33bbff16549f405aebec8b0400da878c", "text": "Lexicon-Based approaches to Sentiment Analysis (SA) differ from the more common machine-learning based approaches in that the former rely solely on previously generated lexical resources that store polarity information for lexical items, which are then identified in the texts, assigned a polarity tag, and finally weighed, to come up with an overall score for the text. Such SA systems have been proved to perform on par with supervised, statistical systems, with the added benefit of not requiring a training set. However, it remains to be seen whether such lexically-motivated systems can cope equally well with extremely short texts, as generated on social networking sites, such as Twitter. In this paper we perform such an evaluation using Sentitext, a lexicon-based SA tool for Spanish.", "title": "" }, { "docid": "29f1e1c9c1601ba194ddcf18de804101", "text": "In this paper, we introduce Waveprint, a novel method for audio identification. Waveprint uses a combination of computer-vision techniques and large-scale-data-stream processing algorithms to create compact fingerprints of audio data that can be efficiently matched. The resulting system has excellent identification capabilities for small snippets of audio that have been degraded in a variety of manners, including competing noise, poor recording quality, and cell-phone playback. We explicitly measure the tradeoffs between performance, memory usage, and computation through extensive experimentation.", "title": "" }, { "docid": "6e1eee6355865bffd6af4c5c1d4a5d31", "text": "Most of the prior work on multi-agent reinforcement learning (MARL) achieves optimal collaboration by directly learning a policy for each agent to maximize a common reward. In this paper, we aim to address this from a different angle. In particular, we consider scenarios where there are self-interested agents (i.e., worker agents) which have their own minds (preferences, intentions, skills, etc.) and can not be dictated to perform tasks they do not want to do. 
For achieving optimal coordination among these agents, we train a super agent (i.e., the manager) to manage them by first inferring their minds based on both current and past observations and then initiating contracts to assign suitable tasks to workers and promise to reward them with corresponding bonuses so that they will agree to work together. The objective of the manager is to maximize the overall productivity as well as minimize payments made to the workers for ad-hoc worker teaming. To train the manager, we propose Mind-aware Multi-agent Management Reinforcement Learning (MRL), which consists of agent modeling and policy learning. We have evaluated our approach in two environments, Resource Collection and Crafting, to simulate multi-agent management problems with various task settings and multiple designs for the worker agents. The experimental results have validated the effectiveness of our approach in modeling worker agents’ minds online, and in achieving optimal ad-hoc teaming with good generalization and fast adaptation.1", "title": "" }, { "docid": "f72b7bdb0d30140f790577c24a18cee3", "text": "The paper deals with new directions in research, development and applications of advanced control methods and structures based on the principles of optimality, robustness and intelligence. Present trends in the complex process control design demand an increasing degree of integration of numerical mathematics, control engineering methods, new control structures based of distribution, embedded network control structure and new information and communication technologies. Furthermore, increasing problems with interactions, process non-linearity's, operating constraints, time delays, uncertainties, and significant dead-times consequently lead to the necessity to develop more sophisticated control strategies. Advanced control methods and new distributed embedded control structures represent the most effective tools for realizing high performance of many technological processes. Main ideas covered in this paper are motivated namely by the development of new advanced control engineering methods (predictive, hybrid predictive, optimal, adaptive, robust, fuzzy logic, neural network) and new possibilities of their SW and HW realizations and successful implementation in industry.", "title": "" }, { "docid": "a29b94fb434ec5899ede49ff18561610", "text": "Contrary to the classical (time-triggered) principle that calculates the control signal in a periodic fashion, an event-driven control is computed and updated only when a certain condition is satisfied. This notably enables to save computations in the control task while ensuring equivalent performance. In this paper, we develop and implement such strategies to control a nonlinear and unstable system, that is the inverted pendulum. We are first interested on the stabilization of the pendulum near its inverted position and propose an event-based control approach. This notably demonstrates the efficiency of the event-based scheme even in the case where the system has to be actively actuated to remain upright. We then study the swinging of the pendulum up to the desired position and propose a low-cost control law based on an energy function. The switch between both strategies is also analyzed. 
A real-time experimentation is realized and shows that a reduction of about 98% and 50% of samples less than the classical scheme is achieved for the swing up and stabilization parts respectively.", "title": "" }, { "docid": "38499d78ab2b66f87e8314d75ff1c72f", "text": "We investigated large-scale systems organization of the whole human brain using functional magnetic resonance imaging (fMRI) data acquired from healthy volunteers in a no-task or 'resting' state. Images were parcellated using a prior anatomical template, yielding regional mean time series for each of 90 regions (major cortical gyri and subcortical nuclei) in each subject. Significant pairwise functional connections, defined by the group mean inter-regional partial correlation matrix, were mostly either local and intrahemispheric or symmetrically interhemispheric. Low-frequency components in the time series subtended stronger inter-regional correlations than high-frequency components. Intrahemispheric connectivity was generally related to anatomical distance by an inverse square law; many symmetrical interhemispheric connections were stronger than predicted by the anatomical distance between bilaterally homologous regions. Strong interhemispheric connectivity was notably absent in data acquired from a single patient, minimally conscious following a brainstem lesion. Multivariate analysis by hierarchical clustering and multidimensional scaling consistently defined six major systems in healthy volunteers-- corresponding approximately to four neocortical lobes, medial temporal lobe and subcortical nuclei- - that could be further decomposed into anatomically and functionally plausible subsystems, e.g. dorsal and ventral divisions of occipital cortex. An undirected graph derived by thresholding the healthy group mean partial correlation matrix demonstrated local clustering or cliquishness of connectivity and short mean path length compatible with prior data on small world characteristics of non-human cortical anatomy. Functional MRI demonstrates a neurophysiological architecture of the normal human brain that is anatomically sensible, strongly symmetrical, disrupted by acute brain injury, subtended predominantly by low frequencies and consistent with a small world network topology.", "title": "" }, { "docid": "06044ef2950f169eba39687cd3e723c1", "text": "Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is neovascularisation, the growth of abnormal new vessels. This paper describes an automated method for the detection of new vessels in retinal images. Two vessel segmentation approaches are applied, using the standard line operator and a novel modified line operator. The latter is designed to reduce false responses to non-vessel edges. Both generated binary vessel maps hold vital information which must be processed separately. This is achieved with a dual classification system. Local morphology features are measured from each binary vessel map to produce two separate feature sets. Independent classification is performed for each feature set using a support vector machine (SVM) classifier. The system then combines these individual classification outcomes to produce a final decision. 
Sensitivity and specificity results using a dataset of 60 images are 0.862 and 0.944 respectively on a per patch basis and 1.00 and 0.90 respectively on a per image basis.", "title": "" }, { "docid": "b9c40aa4c8ac9d4b6cbfb2411c542998", "text": "This review will summarize molecular and genetic analyses aimed at identifying the mechanisms underlying the sequence of events during plant zygotic embryogenesis. These events are being studied in parallel with the histological and morphological analyses of somatic embryogenesis. The strength and limitations of somatic embryogenesis as a model system will be discussed briefly. The formation of the zygotic embryo has been described in some detail, but the molecular mechanisms controlling the differentiation of the various cell types are not understood. In recent years plant molecular and genetic studies have led to the identification and characterization of genes controlling the establishment of polarity, tissue differentiation and elaboration of patterns during embryo development. An investigation of the developmental basis of a number of mutant phenotypes has enabled the identification of gene activities promoting (1) asymmetric cell division and polarization leading to heterogeneous partitioning of the cytoplasmic determinants necessary for the initiation of embryogenesis (e.g. GNOM), (2) the determination of the apical-basal organization which is established independently of the differentiation of the tissues of the radial pattern elements (e.g. KNOLLE, FACKEL, ZWILLE), (3) the differentiation of meristems (e.g. SHOOT-MERISTEMLESS), and (4) the formation of a mature embryo characterized by the accumulation of LEA and storage proteins. The accumulation of these two types of proteins is controlled by ABA-dependent regulatory mechanisms as shown using both ABA-deficient and ABA-insensitive mutants (e.g. ABA, ABI3). Both types of embryogenesis have been studied by different techniques and common features have been identified between them. In spite of the relative difficulty of identifying the original cells involved in the developmental processes of somatic embryogenesis, common regulatory mechanisms are probably involved in the first stages up to the globular form. Signal molecules, such as growth regulators, have been shown to play a role during development of both types of embryos. The most promising method for identifying regulatory mechanisms responsible for the key events of embryogenesis will come from molecular and genetic analyses. The mutations already identified will shed light on the nature of the genes that affect developmental processes as well as elucidating the role of the various regulatory genes that control plant embryogenesis.", "title": "" }, { "docid": "505e80ac2fe0ee1a34c60279b90d0ca7", "text": "In an effective e-learning game, the learner’s enjoyment acts as a catalyst to encourage his/her learning initiative. Therefore, the availability of a scale that effectively measures the enjoyment offered by e-learning games assist the game designer to understanding the strength and flaw of the game efficiently from the learner’s points of view. E-learning games are aimed at the achievement of learning objectives via the creation of a flow effect. Thus, this study is based on Sweetser’s & Wyeth’s framework to develop a more rigorous scale that assesses user enjoyment of e-learning games. 
The scale developed in the present study consists of eight dimensions: Immersion, social interaction, challenge, goal clarity, feedback, concentration, control, and knowledge improvement. Four learning games employed in a university’s online learning course “Introduction to Software Application” were used as the instruments of scale verification. Survey questionnaires were distributed to students taking the course and 166 valid samples were subsequently collected. The results showed that the validity and reliability of the scale, EGameFlow, were satisfactory. Thus, the measurement is an effective tool for evaluating the level of enjoyment provided by e-learning games to their users. © 2008 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
c07d4a86e4df42f37ddcc115c4eac8f2
NaCl on 8-Bit AVR Microcontrollers
[ { "docid": "7c93ceb1f71e5ac65c2c0d22f8a36afe", "text": "NEON is a vector instruction set included in a large fraction of new ARM-based tablets and smartphones. This paper shows that NEON supports high-security cryptography at surprisingly high speeds; normally data arrives at lower speeds, giving the CPU time to handle tasks other than cryptography. In particular, this paper explains how to use a single 800MHz Cortex A8 core to compute the existing NaCl suite of high-security cryptographic primitives at the following speeds: 5.60 cycles per byte (1.14 Gbps) to encrypt using a shared secret key, 2.30 cycles per byte (2.78 Gbps) to authenticate using a shared secret key, 527102 cycles (1517/second) to compute a shared secret key for a new public key, 650102 cycles (1230/second) to verify a signature, and 368212 cycles (2172/second) to sign a message. These speeds make no use of secret branches and no use of secret memory addresses.", "title": "" } ]
[ { "docid": "630e8f538d566af9375c231dd5195a99", "text": "The investigation of the human microbiome is the most rapidly expanding field in biomedicine. Early studies were undertaken to better understand the role of microbiota in carbohydrate digestion and utilization. These processes include polysaccharide degradation, glycan transport, glycolysis, and short-chain fatty acid production. Recent research has demonstrated that the intricate axis between gut microbiota and the host metabolism is much more complex. Gut microbiota—depending on their composition—have disease-promoting effects but can also possess protective properties. This review focuses on disorders of metabolic syndrome, with special regard to obesity as a prequel to type 2 diabetes, type 2 diabetes itself, and type 1 diabetes. In all these conditions, differences in the composition of the gut microbiota in comparison to healthy people have been reported. Mechanisms of the interaction between microbiota and host that have been characterized thus far include an increase in energy harvest, modulation of free fatty acids—especially butyrate—of bile acids, lipopolysaccharides, gamma-aminobutyric acid (GABA), an impact on toll-like receptors, the endocannabinoid system and “metabolic endotoxinemia” as well as “metabolic infection.” This review will also address the influence of already established therapies for metabolic syndrome and diabetes on the microbiota and the present state of attempts to alter the gut microbiota as a therapeutic strategy.", "title": "" }, { "docid": "6d2667dd550e14d4d46b24d9c8580106", "text": "Deficits in gratification delay are associated with a broad range of public health problems, such as obesity, risky sexual behavior, and substance abuse. However, 6 decades of research on the construct has progressed less quickly than might be hoped, largely because of measurement issues. Although past research has implicated 5 domains of delay behavior, involving food, physical pleasures, social interactions, money, and achievement, no published measure to date has tapped all 5 components of the content domain. Existing measures have been criticized for limitations related to efficiency, reliability, and construct validity. Using an innovative Internet-mediated approach to survey construction, we developed the 35-item 5-factor Delaying Gratification Inventory (DGI). Evidence from 4 studies and a large, diverse sample of respondents (N = 10,741) provided support for the psychometric properties of the measure. Specifically, scores on the DGI demonstrated strong internal consistency and test-retest reliability for the 35-item composite, each of the 5 domains, and a 10-item short form. The 5-factor structure fit the data well and had good measurement invariance across subgroups. Construct validity was supported by correlations with scores on closely related self-control measures, behavioral ratings, Big Five personality trait measures, and measures of adjustment and psychopathology, including those on the Minnesota Multiphasic Personality Inventory-2-Restructured Form. DGI scores also showed incremental validity in accounting for well-being and health-related variables. 
The present investigation holds implications for improving public health, accelerating future research on gratification delay, and facilitating survey construction research more generally by demonstrating the suitability of an Internet-mediated strategy.", "title": "" }, { "docid": "6cf4297e4c87f8e55d59867ac137e56d", "text": "We present a novel approach to RTE that exploits a structure-oriented sentence representation followed by a similarity function. The structural features are automatically acquired from tree skeletons that are extracted and generalized from dependency trees. Our method makes use of a limited size of training data without any external knowledge bases (e.g. WordNet) or handcrafted inference rules. We have achieved an accuracy of 71.1% on the RTE-3 development set performing a 10-fold cross validation and 66.9% on the RTE-3 test data.", "title": "" }, { "docid": "49ff711b6c91c9ec42e16ce2f3bb435b", "text": "In this letter, a wideband three-section branch-line hybrid with harmonic suppression is designed using a novel transmission line model. The proposed topology is constructed using a coupled line, two series transmission lines, and open-ended stubs. The required design equations are obtained by applying even- and odd-mode analysis. To support these equations, a three-section branch-line hybrid working at 0.9 GHz is fabricated and tested. The physical area of the prototype is reduced by 87.7% of the conventional hybrid and the fractional bandwidth is greater than 52%. In addition, the proposed technique can eliminate second harmonic by a level better than 15 dB.", "title": "" }, { "docid": "96704e139fd4d72cb64b0acbfb887475", "text": "Project Failure is the major problem undergoing nowadays as seen by software project managers. Imprecision of the estimation is the reason for this problem. As software grew in size and importance it also grew in complexity, making it very difficult to accurately predict the cost of software development. This was the dilemma in past years. The greatest pitfall of software industry was the fast changing nature of software development which has made it difficult to develop parametric models that yield high accuracy for software development in all domains. Development of useful models that accurately predict the cost of developing a software product. It is a very important objective of software industry. In this paper, several existing methods for software cost estimation are illustrated and their aspects will be discussed. This paper summarizes several classes of software cost estimation models and techniques. To achieve all these goals we implement the simulators. No single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.", "title": "" }, { "docid": "4d3de2d03431e8f06a5b8b31a784ecaa", "text": "For medical students, virtual patient dialogue systems can provide useful training opportunities without the cost of employing actors to portray standardized patients. This work utilizes word- and character-based convolutional neural networks (CNNs) for question identification in a virtual patient dialogue system, outperforming a strong word- and character-based logistic regression baseline. 
While the CNNs perform well given sufficient training data, the best system performance is ultimately achieved by combining CNNs with a hand-crafted pattern matching system that is robust to label sparsity, providing a 10% boost in system accuracy and an error reduction of 47% as compared to the pattern-matching system alone.", "title": "" }, { "docid": "52f912cd5a8def1122d7ce6ba7f47271", "text": "System event logs have been frequently used as a valuable resource in data-driven approaches to enhance system health and stability. A typical procedure in system log analytics is to first parse unstructured logs, and then apply data analysis on the resulting structured data. Previous work on parsing system event logs focused on offline, batch processing of raw log files. But increasingly, applications demand online monitoring and processing. We propose an online streaming method Spell, which utilizes a longest common subsequence based approach, to parse system event logs. We show how to dynamically extract log patterns from incoming logs and how to maintain a set of discovered message types in streaming fashion. Evaluation results on large real system logs demonstrate that even compared with the offline alternatives, Spell shows its superiority in terms of both efficiency and effectiveness.", "title": "" }, { "docid": "320c5bf641fa348cd1c8fb806558fe68", "text": "A CMOS low-dropout regulator (LDO) with 3.3 V output voltage and 100 mA output current for system-on-chip applications is presented. The proposed LDO is independent of off-chip capacitor, thus the board space and external pins are reduced. By utilizing dynamic slew-rate enhancement (SRE) circuit and nested Miller compensation (NMC) on LDO structure, the proposed LDO provides high stability during line and load regulation without off-chip load capacitor. The overshot voltage has been limited within 550 mV and settling time is less than 50 mus when load current reducing from 100 mA to 1 mA. By using 30 nA reference current, the quiescent current is 3.3 muA. The experiment results agree with the simulation results. The proposed design is implemented by CSMC 0.5 mum mixed-signal process.", "title": "" }, { "docid": "a0ca6986d59905cea49ed28fa378c69e", "text": "The epidemic of type 2 diabetes and impaired glucose tolerance is one of the main causes of morbidity and mortality worldwide. In both disorders, tissues such as muscle, fat and liver become less responsive or resistant to insulin. This state is also linked to other common health problems, such as obesity, polycystic ovarian disease, hyperlipidaemia, hypertension and atherosclerosis. The pathophysiology of insulin resistance involves a complex network of signalling pathways, activated by the insulin receptor, which regulates intermediary metabolism and its organization in cells. But recent studies have shown that numerous other hormones and signalling events attenuate insulin action, and are important in type 2 diabetes.", "title": "" }, { "docid": "ca64effff681149682be21b512f0e3c9", "text": "In this paper, a grip-force control of an elastic object is proposed based on a visual slip margin feedback. When an elastic object is pressed and slid slightly on a rigid plate, a partial slip, called \"incipient slip\" occurs on the contact surface. The slip margin between an elastic object and a rigid plate is estimated based on the analytic solution of Hertzian contact model. A 1 DOF gripper consists of a camera and a force sensor is developed. 
The slip margin can be estimated from the tangential force measured by a force sensor, the deformation of the elastic object and the radius on the contact area both measured by a camera. In the proposed method, the friction coefficient is not explicitly needed. The grip force is controlled by a direct feedback of the estimated slip margin, whose stability is analytically guaranteed. As a result, the slip margin is maintained to a desired value without occurring the gross slip against a disturbance load force to the object.", "title": "" }, { "docid": "4d1f7ca631304e03b720c501d7e9a227", "text": "Due to the open and distributed characteristics of web service, its access control becomes a challenging problem which has not been addressed properly. In this paper, we show how semantic web technologies can be used to build a flexible access control system for web service. We follow the Role-based Access Control model and extend it with credential attributes. The access control model is represented by a semantic ontology, and specific semantic rules are constructed to implement such as dynamic roles assignment, separation of duty constraints and roles hierarchy reasoning, etc. These semantic rules can be verified and executed automatically by the reasoning engine, which can simplify the definition and enhance the interoperability of the access control policies. The basic access control architecture based on the semantic proposal for web service is presented. Finally, a prototype of the system is implemented to validate the proposal.", "title": "" }, { "docid": "0bd30308a11711f1dc71b8ff8ae8e80c", "text": "Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of TPA eliminates the involvement of the client through the auditing of whether his data stored in the cloud are indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lacks the support of either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the existing proof of storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. 
To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multiuser setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis show that the proposed schemes are highly efficient and provably secure.", "title": "" }, { "docid": "53acdb714d51d9eca25f1e635f781afa", "text": "Research in several areas provides scientific guidance for use of graphical encoding to convey information in an information visualization display. By graphical encoding we mean the use of visual display elements such as icon color, shape, size, or position to convey information about objects represented by the icons. Literature offers inconclusive and often conflicting viewpoints, including the suggestion that the effectiveness of a graphical encoding depends on the type of data represented. Our empirical study suggests that the nature of the users’ perceptual task is more indicative of the effectiveness of a graphical encoding than the type of data represented. 1. Overview of Perceptual Issues In producing a design to visualize search results for a digital library called Envision [12, 13, 19], we found that choosing graphical devices and document attributes to be encoded with each graphical device is a surprisingly difficult task. By graphical devices we mean those visual display elements (e.g., icon color hue, color saturation, flash rate, shape, size, alphanumeric identifiers, position, etc.) used to convey encoded information. Providing access to graphically encoded information requires attention to a range of human cognitive activities, explored by researchers under at least three rubrics: psychophysics of visual search and identification tasks, graphical perception, and graphical language development. Research in these areas provides scientific guidance for design and evaluation of graphical encoding that might otherwise be reduced to opinion and personal taste. Because of space limits, we discuss here only a small portion of the research on graphical encoding that has been conducted. Additional information is in [20]. Ware [29] provides a broader review of perceptual issues pertaining to information visualization. Especially useful for designers are rankings by effectiveness of various graphical devices in communicating different types of data (e.g., nominal, ordinal, or quantitative). Christ [6] provides such rankings in the context of visual search and identification tasks and provides some empirical evidence to support his findings. Mackinlay [17] suggests rankings of graphical devices for conveying nominal, ordinal, and quantitative data in the context of graphical language design, but these rankings have not been empirically validated [personal communication]. Cleveland and McGill [8, 9] have empirically validated their ranking of graphical devices for quantitative data. The rankings suggested by Christ, Mackinlay, and Cleveland and McGill are not the same, while other literature offers more conflicting viewpoints, suggesting the need for further research. 1.1 Visual Search and Identification Tasks Psychophysics is a branch of psychology concerned with the \"relationship between characteristics of physical stimuli and the psychological experience they produce\" [28]. Studies in the psychophysics of visual search and identification tasks have roots in signal detection theory pertaining to air traffic control, process control, and cockpit displays. 
These studies suggest rankings of graphical devices [6, 7] described later in this paper and point out significant perceptual interactions among graphical devices used in multidimensional displays. Visual search tasks require visual scanning to locate one or more targets [6, 7, 31]. With a scatterplotlike display (sometimes known as a starfield display [1]), users perform a visual search task when they scan the display to determine the presence of one or more symbols meeting some specific criterion and to locate those symbols if present. For identification tasks, users go beyond visual search to report semantic data about symbols of interest, typically by answering true/false questions or by noting facts about encoded data [6, 7]. Measures of display effectiveness for visual search and identification tasks include time, accuracy, and cognitive workload. A more thorough introduction to signal detection theory may be found in Wickens’ book [31]. Issues involved in studies that influenced the Envision design are complex and findings are sometimes contradictory. Following is a representative overview, but many important details are necessarily omitted due to space limitations. 1.1.1 Unidimensional Displays. For unidimensional displays — those involving a single graphical code — Christ’s [6, 7] meta-analysis of 42 prior studies suggests the following ranking of graphical devices by effectiveness: color, size, brightness or alphanumeric, and shape. Other studies confirm that color is the most effective graphical device for reducing display search time [7, 14, 25] but find it followed by shape and then letters or digits [7]. Benefits of color-coding increase for high-density displays [15, 16], but using shapes too similar to one another actually increases search time [22]. For identification tasks measuring accuracy with unidimensional displays, Christ’s work [6, 7] suggests the following ranking of graphical devices by effectiveness: alphanumeric, color, brightness, size, and shape. In a later study, Christ found that digits gave the most accurate results but that color, letters, and familiar geometric shapes all produced equal results with experienced subjects [7]. However, Jubis [14] found that shape codes yielded faster mean reaction times than color codes, while Kopala [15] found no significant difference among codes for identification tasks. 1.1.2 Multidimensional Displays. For multidimensional displays — those using multiple graphical devices combined in one visual object to encode several pieces of information — codes may be either redundant or non-redundant. A redundant code using color and shape to encode the same information yields average search speeds even faster than non-redundant color or shape encoding [7]. Used redundantly with other codes, color yields faster results than shape, and either color or shape is superior as a redundant code to both letters and digits [7]. Jubis [14] confirms that a redundant code involving both color and shape is superior to shape coding but is approximately equal to non-redundant color-coding. For difficult tasks, using redundant color-coding may significantly reduce reaction time and increase accuracy [15]. Benefits of redundant color-coding increase as displays become more cluttered or complex [15]. 1.1.3 Interactions Among Graphical Devices .
Significant interactions among graphical devices complicate design for multidimensional displays. Color-coding interferes with all achromatic codes, reducing accuracy by as much as 43% [6]. Indeed, Luder [16] suggests that color has such cognitive dominance that it should only be used to encode the most important data and in situations where dependence on color-coding does not increase risk. While we found no supporting empirical evidence, we believe size and shape interact, causing the shape of very small objects to be perceived less accurately. 1.1.4 Ranges of Graphical Devices. The number of instances of each graphical device (e.g., how many colors or shapes are used in the code) is significant because it limits the range or number of values encoded using that device [3]. The conservative recommendation is to use only five or six distinct colors or shapes [3, 7, 27, 31]. However, some research suggests that 10 [3] to 18 [24] colors may be used for search tasks. 1.1.5 Integration vs. Non-integration Tasks. Later research has focused on how humans extract information from a multidimensional display to perform both integration and non-integration tasks [4, 26, 27]. An integration task uses information encoded non-redundantly with two or more graphical devices to reach a single decision or action, while a non-integration task bases decisions or actions on information encoded in only one graphical device. Studies [4, 30] provide evidence that object displays, in which multiple visual attributes of a single object present information about multiple characteristics, facilitate integration tasks, especially where multiple graphical encodings all convey information relevant to the task at hand. However, object displays hinder non-integration tasks, as additional effort is required to filter out unwanted information communicated by the objects. 1.2 Graphical Perception Graphical perception is “the visual decoding of the quantitative and qualitative information encoded on graphs,” where visual decoding means “instantaneous perception of the visual field that comes without apparent mental effort” [9, p. 828]. Cleveland and McGill studied the perception of quantitative data such as “numerical values of a variable...that are not highly discrete...” [9, p. 828]. They have identified and empirically validated a ranking of graphical devices for displaying quantitative data, ordered as follows from most to least accurately perceived [9, p. 830]: Position along a common scale; Position on identical but non-aligned scales; Length; Angle or Slope; Area; Volume, Density, and/or Color saturation; Color hue. 1.3 Graphical Language Development Graphical language development is based on the assertion that graphical devices communicate information equivalent to sentences [17] and thus call for attention to appropriate use of each graphical device. In his discussion of graphical languages, Mackinlay [17] suggests three different rankings of the effectiveness of various graphical devices in communicating quantitative (numerical), ordinal (ranked), and nominal (non-ordinal textual) data about objects. Although based on psychophysical and graphical perception research, Mackinlay's rankings have not been experimentally validated [personal communication]. 
1.4 Observations on Prior Research These studies make it clear that no single graphical device works equally well for all users, nor does an", "title": "" }, { "docid": "17253a37e4f26cb6dabf1e1eb4e9a878", "text": "The recent development of Bayesian phylogenetic inference using Markov chain Monte Carlo (MCMC) techniques has facilitated the exploration of parameter-rich evolutionary models. At the same time, stochastic models have become more realistic (and complex) and have been extended to new types of data, such as morphology. Based on this foundation, we developed a Bayesian MCMC approach to the analysis of combined data sets and explored its utility in inferring relationships among gall wasps based on data from morphology and four genes (nuclear and mitochondrial, ribosomal and protein coding). Examined models range in complexity from those recognizing only a morphological and a molecular partition to those having complex substitution models with independent parameters for each gene. Bayesian MCMC analysis deals efficiently with complex models: convergence occurs faster and more predictably for complex models, mixing is adequate for all parameters even under very complex models, and the parameter update cycle is virtually unaffected by model partitioning across sites. Morphology contributed only 5% of the characters in the data set but nevertheless influenced the combined-data tree, supporting the utility of morphological data in multigene analyses. We used Bayesian criteria (Bayes factors) to show that process heterogeneity across data partitions is a significant model component, although not as important as among-site rate variation. More complex evolutionary models are associated with more topological uncertainty and less conflict between morphology and molecules. Bayes factors sometimes favor simpler models over considerably more parameter-rich models, but the best model overall is also the most complex and Bayes factors do not support exclusion of apparently weak parameters from this model. Thus, Bayes factors appear to be useful for selecting among complex models, but it is still unclear whether their use strikes a reasonable balance between model complexity and error in parameter estimates.", "title": "" }, { "docid": "fb4fcc4d5380c4123b24467c1ca2a8e3", "text": "Deep neural networks are traditionally trained using humandesigned stochastic optimization algorithms, such as SGD and Adam. Recently, the approach of learning to optimize network parameters has emerged as a promising research topic. However, these learned black-box optimizers sometimes do not fully utilize the experience in human-designed optimizers, therefore have limitation in generalization ability. In this paper, a new optimizer, dubbed as HyperAdam, is proposed that combines the idea of “learning to optimize” and traditional Adam optimizer. Given a network for training, its parameter update in each iteration generated by HyperAdam is an adaptive combination of multiple updates generated by Adam with varying decay rates. The combination weights and decay rates in HyperAdam are adaptively learned depending on the task. HyperAdam is modeled as a recurrent neural network with AdamCell, WeightCell and StateCell. 
It is shown to be state-of-the-art for training various networks, such as multilayer perceptrons, CNNs and LSTMs.", "title": "" }, { "docid": "01a4b2be52e379db6ace7fa8ed501805", "text": "The goal of our work is to complete the depth channel of an RGB-D image.
Commodity-grade depth cameras often fail to sense depth for shiny, bright, transparent, and distant surfaces. To address this problem, we train a deep network that takes an RGB image as input and predicts dense surface normals and occlusion boundaries. Those predictions are then combined with raw depth observations provided by the RGB-D camera to solve for depths for all pixels, including those missing in the original observation. This method was chosen over others (e.g., inpainting depths directly) as the result of extensive experiments with a new depth completion benchmark dataset, where holes are filled in training data through the rendering of surface reconstructions created from multiview RGB-D scans. Experiments with different network inputs, depth representations, loss functions, optimization methods, inpainting methods, and deep depth estimation networks show that our proposed approach provides better depth completions than these alternatives.", "title": "" } ]
scidocsrr
7f94a0e839dbdd0cb698f1f04f9f83c1
Design for 5G Mobile Network Architecture
[ { "docid": "4412bca4e9165545e4179d261828c85c", "text": "Today 3G mobile systems are on the ground providing IP connectivity for real-time and non-real-time services. On the other side, there are many wireless technologies that have proven to be important, with the most important ones being 802.11 Wireless Local Area Networks (WLAN) and 802.16 Wireless Metropolitan Area Networks (WMAN), as well as ad-hoc Wireless Personal Area Network (WPAN) and wireless networks for digital TV and radio broadcast. Then, the concepts of 4G is already much discussed and it is almost certain that 4G will include several standards under a common umbrella, similarly to 3G, but with IEEE 802.xx wireless mobile networks included from the beginning. The main contribution of this paper is definition of 5G (Fifth Generation) mobile network concept, which is seen as user-centric concept instead of operator-centric as in 3G or service-centric concept as seen for 4G. In the proposed concept the mobile user is on the top of all. The 5G terminals will have software defined radios and modulation scheme as well as new error-control schemes can be downloaded from the Internet on the run. The development is seen towards the user terminals as a focus of the 5G mobile networks. The terminals will have access to different wireless technologies at the same time and the terminal should be able to combine different flows from different technologies. Each network will be responsible for handling user-mobility, while the terminal will make the final choice among different wireless/mobile access network providers for a given service. The paper also proposes intelligent Internet phone concept where the mobile phone can choose the best connections by selected constraints and dynamically change them during a single end-to-end connection. The proposal in this paper is fundamental shift in the mobile networking philosophy compared to existing 3G and near-soon 4G mobile technologies, and this concept is called here the 5G.", "title": "" } ]
[ { "docid": "bda4bdc27e9ea401abb214c3fb7c9813", "text": "Lipedema is a common, but often underdiagnosed masquerading disease of obesity, which almost exclusively affects females. There are many debates regarding the diagnosis as well as the treatment strategies of the disease. The clinical diagnosis is relatively simple, however, knowledge regarding the pathomechanism is less than limited and curative therapy does not exist at all demanding an urgent need for extensive research. According to our hypothesis, lipedema is an estrogen-regulated polygenetic disease, which manifests in parallel with feminine hormonal changes and leads to vasculo- and lymphangiopathy. Inflammation of the peripheral nerves and sympathetic innervation abnormalities of the subcutaneous adipose tissue also involving estrogen may be responsible for neuropathy. Adipocyte hyperproliferation is likely to be a secondary phenomenon maintaining a vicious cycle. Herein, the relevant articles are reviewed from 1913 until now and discussed in context of the most likely mechanisms leading to the disease, which could serve as a starting point for further research.", "title": "" }, { "docid": "a727d28ed4153d9d9744b3e2b5e47251", "text": "Darts is enjoyed both as a pub game and as a professional competitive activity.Yet most players aim for the highest scoring region of the board, regardless of their level of skill. By modelling a dart throw as a two-dimensional Gaussian random variable, we show that this is not always the optimal strategy.We develop a method, using the EM algorithm, for a player to obtain a personalized heat map, where the bright regions correspond to the aiming locations with high (expected) pay-offs. This method does not depend in any way on our Gaussian assumption, and we discuss alternative models as well.", "title": "" }, { "docid": "9a4fc12448d166f3a292bfdf6977745d", "text": "Enabled by the rapid development of virtual reality hardware and software, 360-degree video content has proliferated. From the network perspective, 360-degree video transmission imposes significant challenges because it consumes 4 6χ the bandwidth of a regular video with the same resolution. To address these challenges, in this paper, we propose a motion-prediction-based transmission mechanism that matches network video transmission to viewer needs. Ideally, if viewer motion is perfectly known in advance, we could reduce bandwidth consumption by 80%. Practically, however, to guarantee the quality of viewing experience, we have to address the random nature of viewer motion. Based on our experimental study of viewer motion (comprising 16 video clips and over 150 subjects), we found the viewer motion can be well predicted in 100∼500ms. We propose a machine learning mechanism that predicts not only viewer motion but also prediction deviation itself. The latter is important because it provides valuable input on the amount of redundancy to be transmitted. Based on such predictions, we propose a targeted transmission mechanism that minimizes overall bandwidth consumption while providing probabilistic performance guarantees. Real-data-based evaluations show that the proposed scheme significantly reduces bandwidth consumption while minimizing performance degradation, typically a 45% bandwidth reduction with less than 0.1% failure ratio.", "title": "" }, { "docid": "850e9c1beae0635e629fbb44bda14dc7", "text": "Power law distribution seems to be an important characteristic of web graphs. 
Several existing web graph models generate power law graphs by adding new vertices and non-uniform edge connectivities to existing graphs. Researchers have conjectured that preferential connectivity and incremental growth are both required for the power law distribution. In this paper, we propose a different web graph model with power law distribution that does not require incremental growth. We also provide a comparison of our model with several others in their ability to predict web graph clustering behavior.", "title": "" }, { "docid": "e7664a3c413f86792b98912a0241a6ac", "text": "Seq2seq learning has produced promising results on summarization. However, in many cases, system summaries still struggle to keep the meaning of the original intact. They may miss out important words or relations that play critical roles in the syntactic structure of source sentences. In this paper, we present structure-infused copy mechanisms to facilitate copying important words and relations from the source sentence to summary sentence. The approach naturally combines source dependency structure with the copy mechanism of an abstractive sentence summarizer. Experimental results demonstrate the effectiveness of incorporating source-side syntactic information in the system, and our proposed approach compares favorably to state-of-the-art methods.", "title": "" }, { "docid": "55658c75bcc3a12c1b3f276050f28355", "text": "Sensing systems such as biomedical implants, infrastructure monitoring systems, and military surveillance units are constrained to consume only picowatts to nanowatts in standby and active mode, respectively. This tight power budget places ultra-low power demands on all building blocks in the systems. This work proposes a voltage reference for use in such ultra-low power systems, referred to as the 2T voltage reference, which has been demonstrated in silicon across three CMOS technologies. Prototype chips in 0.13 μm show a temperature coefficient of 16.9 ppm/°C (best) and line sensitivity of 0.033%/V, while consuming 2.22 pW in 1350 μm2. The lowest functional Vdd is 0.5 V. The proposed design improves energy efficiency by 2 to 3 orders of magnitude while exhibiting better line sensitivity and temperature coefficient in less area, compared to other nanowatt voltage references. For process spread analysis, 49 dies are measured across two runs, showing the design exhibits comparable spreads in TC and output voltage to existing voltage references in the literature. Digital trimming is demonstrated, and assisted one temperature point digital trimming, guided by initial samples with two temperature point trimming, enables TC < 50 ppm/°C and ±0.35% output precision across all 25 dies. Ease of technology portability is demonstrated with silicon measurement results in 65 nm, 0.13 μm, and 0.18 μm CMOS technologies.", "title": "" }, { "docid": "7437f0c8549cb8f73f352f8043a80d19", "text": "Graphene is considered as one of the leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain.
In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.", "title": "" }, { "docid": "a871176628b28af28f630c447236a2d9", "text": "More than 70 years ago, the filamentous ascomycete Trichoderma reesei was isolated on the Solomon Islands due to its ability to degrade and thrive on cellulose containing fabrics. This trait that relies on its secreted cellulases is nowadays exploited by several industries. Most prominently in biorefineries which use T. reesei enzymes to saccharify lignocellulose from renewable plant biomass in order to produce biobased fuels and chemicals. In this review we summarize important milestones of the development of T. reesei as the leading production host for biorefinery enzymes, and discuss emerging trends in strain engineering. Trichoderma reesei has very recently also been proposed as a consolidated bioprocessing organism capable of direct conversion of biopolymeric substrates to desired products. We therefore cover this topic by reviewing novel approaches in metabolic engineering of T. reesei.", "title": "" }, { "docid": "101ecfb3d6a20393d147cd2061414369", "text": "In this paper we propose a novel volumetric multi-resolution mapping system for RGB-D images that runs on a standard CPU in real-time. Our approach generates a textured triangle mesh from a signed distance function that it continuously updates as new RGB-D images arrive. We propose to use an octree as the primary data structure which allows us to represent the scene at multiple scales. Furthermore, it allows us to grow the reconstruction volume dynamically. As most space is either free or unknown, we allocate and update only those voxels that are located in a narrow band around the observed surface. In contrast to a regular grid, this approach saves enormous amounts of memory and computation time. The major challenge is to generate and maintain a consistent triangle mesh, as neighboring cells in the octree are more difficult to find and may have different resolutions. To remedy this, we present in this paper a novel algorithm that keeps track of these dependencies, and efficiently updates corresponding parts of the triangle mesh. In our experiments, we demonstrate the real-time capability on a large set of RGB-D sequences. As our approach does not require a GPU, it is well suited for applications on mobile or flying robots with limited computational resources.", "title": "" }, { "docid": "988c161ceae388f5dbcdcc575a9fa465", "text": "This work presents an architecture for single source, single point noise cancellation that seeks adequate gain margin and high performance for both stationary and nonstationary noise sources by combining feedforward and feedback control. Gain margins and noise reduction performance of the hybrid control architecture are validated experimentally using an earcup from a circumaural hearing protector. 
Results show that the hybrid system provides 5 to 30 dB active performance in the frequency range 50-800 Hz for tonal noise and 18-27 dB active performance in the same frequency range for nonstationary noise, such as aircraft or helicopter cockpit noise, improving low frequency (> 100 Hz) performance by up to 15 dB over either control component acting individually.", "title": "" }, { "docid": "0c420c064519e15e071660c750c0b7e3", "text": "In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.", "title": "" }, { "docid": "22b1974fa802c9ea224e6b0b6f98cedb", "text": "This paper presents a human-inspired control approach to bipedal robotic walking: utilizing human data and output functions that appear to be intrinsic to human walking in order to formally design controllers that provably result in stable robotic walking. Beginning with human walking data, outputs-or functions of the kinematics-are determined that result in a low-dimensional representation of human locomotion. These same outputs can be considered on a robot, and human-inspired control is used to drive the outputs of the robot to the outputs of the human. The main results of this paper are that, in the case of both under and full actuation, the parameters of this controller can be determined through a human-inspired optimization problem that provides the best fit of the human data while simultaneously provably guaranteeing stable robotic walking for which the initial condition can be computed in closed form. These formal results are demonstrated in simulation by considering two bipedal robots-an underactuated 2-D bipedal robot, AMBER, and fully actuated 3-D bipedal robot, NAO-for which stable robotic walking is automatically obtained using only human data. Moreover, in both cases, these simulated walking gaits are realized experimentally to obtain human-inspired bipedal walking on the actual robots.", "title": "" }, { "docid": "f409eace05cd617355440509da50d685", "text": "Social media platforms encourage people to share diverse aspects of their daily life. 
Among these, shared health-related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza-like illness in Portugal. Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p < 0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the English language. Our results further validate those studies and show that by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results.", "title": "" }, { "docid": "16ce10ae21b7ef66746937ba6c9bf321", "text": "In recent years, deep learning has become increasingly prevalent in the field of Software Engineering (SE). However, many open issues still remain to be investigated. How do researchers integrate deep learning into SE problems? Which SE phases are facilitated by deep learning? Do practitioners benefit from deep learning? The answers help practitioners and researchers develop practical deep learning models for SE tasks. To answer these questions, we conduct a bibliography analysis on 98 research papers in SE that use deep learning techniques. We find that 41 SE tasks in all SE phases have been facilitated by deep learning integrated solutions. Among these, 84.7% of the papers only use standard deep learning models and their variants to solve SE problems. The practicability becomes a concern in utilizing deep learning techniques. How to improve the effectiveness, efficiency, understandability, and testability of deep learning based solutions may attract more SE researchers in the future. Introduction Driven by the success of deep learning in data mining and pattern recognition, recent years have witnessed an increasing trend for industrial practitioners and academic researchers to integrate deep learning into SE tasks [1]-[3]. For typical SE tasks, deep learning helps SE practitioners extract requirements from natural language text [1], generate source code [2], predict defects in software [3], etc. As initial statistics of research papers in SE in this study show, deep learning has achieved competitive performance against previous algorithms on about 40 SE tasks. There are at least 98 research papers published or accepted in 66 venues, integrating deep learning into SE tasks.
Despite the encouraging amount of papers and venues, there exists little overview analysis on deep learning in SE, e.g., the common way to integrate deep learning into SE, the SE phases facilitated by deep learning, the interests of SE practitioners on deep learning, etc. Understanding these questions is important. On the one hand, it helps practitioners and researchers get an overview understanding of deep learning in SE. On the other hand, practitioners and researchers can develop more practical deep learning models according to the analysis. For this purpose, this study conducts a bibliography analysis on research papers in the field of SE that use deep learning techniques. In contrast to literature reviews,", "title": "" }, { "docid": "986279f6f47189a6d069c0336fa4ba94", "text": "Compared to the traditional single-phase-shift control, dual-phase-shift (DPS) control can greatly improve the performance of the isolated bidirectional dual-active-bridge dc-dc converter (IBDC). This letter points out some wrong knowledge about transmission power of IBDC under DPS control in the earlier studies. On this basis, this letter gives the detailed theoretical and experimental analyses of the transmission power of IBDC under DPS control. And the experimental results showed agreement with theoretical analysis.", "title": "" }, { "docid": "19792ab5db07cd1e6cdde79854ba8cb7", "text": "Empathy allows us to simulate others' affective and cognitive mental states internally, and it has been proposed that the mirroring or motor representation systems play a key role in such simulation. As emotions are related to important adaptive events linked with benefit or danger, simulating others' emotional states might constitute of a special case of empathy. In this functional magnetic resonance imaging (fMRI) study we tested if emotional versus cognitive empathy would facilitate the recruitment of brain networks involved in motor representation and imitation in healthy volunteers. Participants were presented with photographs depicting people in neutral everyday situations (cognitive empathy blocks), or suffering serious threat or harm (emotional empathy blocks). Participants were instructed to empathize with specified persons depicted in the scenes. Emotional versus cognitive empathy resulted in increased activity in limbic areas involved in emotion processing (thalamus), and also in cortical areas involved in face (fusiform gyrus) and body perception, as well as in networks associated with mirroring of others' actions (inferior parietal lobule). When brain activation resulting from viewing the scenes was controlled, emotional empathy still engaged the mirror neuron system (premotor cortex) more than cognitive empathy. Further, thalamus and primary somatosensory and motor cortices showed increased functional coupling during emotional versus cognitive empathy. The results suggest that emotional empathy is special. Emotional empathy facilitates somatic, sensory, and motor representation of other peoples' mental states, and results in more vigorous mirroring of the observed mental and bodily states than cognitive empathy.", "title": "" }, { "docid": "220a0be60be41705a95908df8180cf95", "text": "Since the introduction of the first power module by Semikron in 1975, many innovations have been made to improve the thermal, electrical, and mechanical performance of power modules. These innovations in packaging technology focus on the enhancement of the heat dissipation and thermal cycling capability of the modules. 
Thermal cycles, caused by varying load and environmental operating conditions, induce high mechanical stress in the interconnection layers of the power module due to the different coefficients of thermal expansion (CTE), leading to fatigue and growth of microcracks in the bonding materials. As a result, the lifetime of power modules can be severely limited in practical applications. Furthermore, to reduce the size and weight of converters, the semiconductors are being operated at higher junction temperatures. Higher temperatures are especially of great interest for use of wide-bandgap materials, such as SiC and GaN, because these materials leverage their material characteristics, particularly at higher temperatures. To satisfy these tightened requirements, on the one hand, conventional power modules, i.e., direct bonded Cu (DBC)-based systems with bond wire contacts, have been further improved. On the other hand, alternative packaging techniques, e.g., chip embedding into printed circuit boards (PCBs) and power module packaging based on the selective laser melting (SLM) technique, have been developed, which might constitute an alternative to conventional power modules in certain applications.", "title": "" }, { "docid": "06f1c7daafcf59a8eb2ddf430d0d7f18", "text": "OBJECTIVES\nWe aimed to evaluate the efficacy of reinforcing short-segment pedicle screw fixation with polymethyl methacrylate (PMMA) vertebroplasty in patients with thoracolumbar burst fractures.\n\n\nMETHODS\nWe enrolled 70 patients with thoracolumbar burst fractures for treatment with short-segment pedicle screw fixation. Fractures in Group A (n = 20) were reinforced with PMMA vertebroplasty during surgery. Group B patients (n = 50) were not treated with PMMA vertebroplasty. Kyphotic deformity, anterior vertebral height, instrument failure rates, and neurological function outcomes were compared between the two groups.\n\n\nRESULTS\nKyphosis correction was achieved in Group A (PMMA vertebroplasty) and Group B (Group A, 6.4 degrees; Group B, 5.4 degrees). At the end of the follow-up period, kyphosis correction was maintained in Group A but lost in Group B (Group A, 0.33-degree loss; Group B, 6.20-degree loss) (P = 0.0001). After surgery, greater anterior vertebral height was achieved in Group A than in Group B (Group A, 12.9%; Group B, 2.3%) (P < 0.001). During follow-up, anterior vertebral height was maintained only in Group A (Group A, 0.13 +/- 4.06%; Group B, -6.17 +/- 1.21%) (P < 0.001). Patients in both Groups A and B demonstrated good postoperative Denis Pain Scale grades (P1 and P2), but Group A had better results than Group B in terms of the control of severe and constant pain (P4 and P5) (P < 0.001). The Frankel Performance Scale scores increased by nearly 1 in both Groups A and B. Group B was subdivided into Group B1 and B2. Group B1 consisted of patients who experienced instrument failure, including screw pullout, breakage, disconnection, and dislodgement (n = 11). Group B2 comprised patients from Group B who did not experience instrument failure (n = 39). There were no instrument failures among patients in Group A. Preoperative kyphotic deformity was greater in Group B1 (23.5 +/- 7.9 degrees) than in Group B2 (16.8 +/- 8.40 degrees), P < 0.05.
Severe and constant pain (P4 and P5) was noted in 36% of Group B1 patients (P < 0.001), and three of these patients required removal of their implants.\n\n\nCONCLUSION\nReinforcement of short-segment pedicle fixation with PMMA vertebroplasty for the treatment of patients with thoracolumbar burst fracture may achieve and maintain kyphosis correction, and it may also increase and maintain anterior vertebral height. Good Denis Pain Scale grades and improvement in Frankel Performance Scale scores were found in patients without instrument failure (Groups A and B2). Patients with greater preoperative kyphotic deformity had a higher risk of instrument failure if they did not undergo reinforcement with vertebroplasty. PMMA vertebroplasty offers immediate spinal stability in patients with thoracolumbar burst fractures, decreases the instrument failure rate, and provides better postoperative pain control than without vertebroplasty.", "title": "" }, { "docid": "deb3ac73ec2e8587371c6078dc4b2205", "text": "Natural antimicrobials as well as essential oils (EOs) have gained interest to inhibit pathogenic microorganisms and to control food borne diseases. Campylobacter spp. are one of the most common causative agents of gastroenteritis. In this study, cardamom, cumin, and dill weed EOs were evaluated for their antibacterial activities against Campylobacter jejuni and Campylobacter coli by using agar-well diffusion and broth microdilution methods, along with the mechanisms of antimicrobial action. Chemical compositions of EOs were also tested by gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS). The results showed that cardamom and dill weed EOs possess greater antimicrobial activity than cumin with larger inhibition zones and lower minimum inhibitory concentrations. The permeability of cell membrane and cell membrane integrity were evaluated by determining relative electric conductivity and release of cell constituents into supernatant at 260 nm, respectively. Moreover, effect of EOs on the cell membrane of Campylobacter spp. was also investigated by measuring extracellular ATP concentration. Increase of relative electric conductivity, extracellular ATP concentration, and cell constituents' release after treatment with EOs demonstrated that tested EOs affected the membrane integrity of Campylobacter spp. The results supported high efficiency of cardamom, cumin, and dill weed EOs to inhibit Campylobacter spp. by impairing the bacterial cell membrane.", "title": "" } ]
scidocsrr
5a1c1103fe0ec99a1fb094ceba3fcba5
BlurMe: inferring and obfuscating user gender based on ratings
[ { "docid": "6b5c3a9f31151ef62f19085195ff5fc5", "text": "We consider the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users. Specifically, we consider the Netflix Prize data set, and its leading algorithms, adapted to the framework of differential privacy.\n Unlike prior privacy work concerned with cryptographically securing the computation of recommendations, differential privacy constrains a computation in a way that precludes any inference about the underlying records from its output. Such algorithms necessarily introduce uncertainty--i.e., noise--to computations, trading accuracy for privacy.\n We find that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy. To adapt these algorithms, we explicitly factor them into two parts, an aggregation/learning phase that can be performed with differential privacy guarantees, and an individual recommendation phase that uses the learned correlations and an individual's data to provide personalized recommendations. The adaptations are non-trivial, and involve both careful analysis of the per-record sensitivity of the algorithms to calibrate noise, as well as new post-processing steps to mitigate the impact of this noise.\n We measure the empirical trade-off between accuracy and privacy in these adaptations, and find that we can provide non-trivial formal privacy guarantees while still outperforming the Cinematch baseline Netflix provides.", "title": "" } ]
[ { "docid": "0d8c5526a5e5e69c644f27e11ecbfd5d", "text": "Multi-view learning can provide self-supervision when different views are available of the same data. The distributional hypothesis provides another form of useful self-supervision from adjacent sentences which are plentiful in large unlabelled corpora. Motivated by the asymmetry in the two hemispheres of the human brain as well as the observation that different learning architectures tend to emphasise different aspects of sentence meaning, we create a unified multi-view sentence representation learning framework, in which, one view encodes the input sentence with a Recurrent Neural Network (RNN), and the other view encodes it with a simple linear model, and the training objective is to maximise the agreement specified by the adjacent context information between two views. We show that, after training, the vectors produced from our multi-view training provide improved representations over the single-view training, and the combination of different views gives further representational improvement and demonstrates solid transferability on standard downstream tasks.", "title": "" }, { "docid": "50e081b178a1a308c61aae4a29789816", "text": "The ability to engineer enzymes and other proteins to any desired stability would have wide-ranging applications. Here, we demonstrate that computational design of a library with chemically diverse stabilizing mutations allows the engineering of drastically stabilized and fully functional variants of the mesostable enzyme limonene epoxide hydrolase. First, point mutations were selected if they significantly improved the predicted free energy of protein folding. Disulfide bonds were designed using sampling of backbone conformational space, which tripled the number of experimentally stabilizing disulfide bridges. Next, orthogonal in silico screening steps were used to remove chemically unreasonable mutations and mutations that are predicted to increase protein flexibility. The resulting library of 64 variants was experimentally screened, which revealed 21 (pairs of) stabilizing mutations located both in relatively rigid and in flexible areas of the enzyme. Finally, combining 10-12 of these confirmed mutations resulted in multi-site mutants with an increase in apparent melting temperature from 50 to 85°C, enhanced catalytic activity, preserved regioselectivity and a >250-fold longer half-life. The developed Framework for Rapid Enzyme Stabilization by Computational libraries (FRESCO) requires far less screening than conventional directed evolution.", "title": "" }, { "docid": "729fac8328b57376a954f2e7fc10405e", "text": "Generative Adversarial Networks are proved to be efficient on various kinds of image generation tasks. However, it is still a challenge if we want to generate images precisely. Many researchers focus on how to generate images with one attribute. But image generation under multiple attributes is still a tough work. In this paper, we try to generate a variety of face images under multiple constraints using a pipeline process. The Pip-GAN (Pipeline Generative Adversarial Network) we present employs a pipeline network structure which can generate a complex facial image step by step using a neutral face image. 
We applied our method on two face image databases and demonstrate its ability to generate convincing novel images of unseen identities under multiple conditions previously.", "title": "" }, { "docid": "9a2d79d9df9e596e26f8481697833041", "text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.", "title": "" }, { "docid": "42f3032626b2a002a855476a718a2b1b", "text": "Learning controllers for bipedal robots is a challenging problem, often requiring expert knowledge and extensive tuning of parameters that vary in different situations. Recently, deep reinforcement learning has shown promise at automatically learning controllers for complex systems in simulation. This has been followed by a push towards learning controllers that can be transferred between simulation and hardware, primarily with the use of domain randomization. However, domain randomization can make the problem of finding stable controllers even more challenging, especially for underactuated bipedal robots. In this work, we explore whether policies learned in simulation can be transferred to hardware with the use of high-fidelity simulators and structured controllers. We learn a neural network policy which is a part of a more structured controller. While the neural network is learned in simulation, the rest of the controller stays fixed, and can be tuned by the expert as needed. We show that using this approach can greatly speed up the rate of learning in simulation, as well as enable transfer of policies between simulation and hardware. We present our results on an ATRIAS robot and explore the effect of action spaces and cost functions on the rate of transfer between simulation and hardware. Our results show that structured policies can indeed be learned in simulation and implemented on hardware successfully. This has several advantages, as the structure preserves the intuitive nature of the policy, and the neural network improves the performance of the hand-designed policy. 
In this way, we propose a way of using neural networks to improve expert designed controllers, while maintaining ease of understanding.", "title": "" }, { "docid": "2258a0ba739557d489a796f050fad3e0", "text": "The term fractional calculus is more than 300 years old. It is a generalization of the ordinary differentiation and integration to non-integer (arbitrary) order. The subject is as old as the calculus of differentiation and goes back to times when Leibniz, Gauss, and Newton invented this kind of calculation. In a letter to L’Hospital in 1695 Leibniz raised the following question (Miller and Ross, 1993): “Can the meaning of derivatives with integer order be generalized to derivatives with non-integer orders?\" The story goes that L’Hospital was somewhat curious about that question and replied by another question to Leibniz. “What if the order will be 1/2?\" Leibniz in a letter dated September 30, 1695 replied: “It will lead to a paradox, from which one day useful consequences will be drawn.\" The question raised by Leibniz for a fractional derivative was an ongoing topic in the last 300 years. Several mathematicians contributed to this subject over the years. People like Liouville, Riemann, and Weyl made major contributions to the theory of fractional calculus. The story of the fractional calculus continued with contributions from Fourier, Abel, Leibniz, Grünwald, and Letnikov. Nowadays, the fractional calculus attracts many scientists and engineers. There are several applications of this mathematical phenomenon in mechanics, physics, chemistry, control theory and so on (Caponetto et al., 2010; Magin, 2006; Monje et al., 2010; Oldham and Spanier, 1974; Oustaloup, 1995; Podlubny, 1999). It is natural that many authors tried to solve the fractional derivatives, fractional integrals and fractional differential equations in Matlab. A few very good and interesting Matlab functions were already submitted to the MathWorks, Inc. Matlab Central File Exchange, where they are freely downloadable for sharing among the users. In this chapter we will use some of them. It is worth mentioning some addition to Matlab toolboxes, which are appropriate for the solution of fractional calculus problems. One of them is a toolbox created by CRONE team (CRONE, 2010) and another one is the Fractional State–Space Toolkit developed by Dominik Sierociuk (Sierociuk, 2005). Last but not least we should also mention a Matlab toolbox created by Dingyü Xue (Xue, 2010), which is based on Matlab object for fractional-order transfer function and some manipulation with this class of the transfer function. Despite that the mentioned toolboxes are mainly for control systems, they can be “abused\" for solutions of general problems related to fractional calculus as well. 10", "title": "" }, { "docid": "d767a741ee5794a71de1afb84169f1b8", "text": "The advent of Machine Learning as a Service (MLaaS) makes it possible to outsource a visual object recognition task to an external (e.g. cloud) provider. However, outsourcing such an image classification task raises privacy concerns, both from the image provider’s perspective, who wishes to keep their images confidential, and from the classification algorithm provider’s perspective, who wishes to protect the intellectual property of their classifier. We propose PICS, a private image classification system, based on polynomial kernel support vector machine (SVM) learning. 
We selected SVM because it allows us to apply only low-degree functions for the classification on private data, which is the reason why our solution remains computationally efficient. Our solution is based on Secure Multiparty Computation (MPC), it does not leak any information about the images to be classified, nor about the classifier parameters, and it is provably secure. We demonstrate the practicality of our approach by conducting experiments on realistic datasets. We show that our approach achieves high accuracy, comparable to that achieved on non-privacy-protected data while the input-dependent phase is at least 100 times faster than the similar approach with Fully Homomorphic Encryption.", "title": "" }, { "docid": "4ec266df91a40330b704c4e10eacb820", "text": "Recently many cases of missing children between ages 14 and 17 years are reported. Parents always worry about the possibility of kidnapping of their children. This paper proposes an Android based solution to aid parents to track their children in real time. Nowadays, most mobile phones are equipped with location services capabilities allowing us to get the device’s geographic position in real time. The proposed solution takes the advantage of the location services provided by mobile phone since most of kids carry mobile phones. The mobile application use the GPS and SMS services found in Android mobile phones. It allows the parent to get their child’s location on a real time map. The system consists of two sides, child side and parent side. A parent’s device main duty is to send a request location SMS to the child’s device to get the location of the child. On the other hand, the child’s device main responsibility is to reply the GPS position to the parent’s device upon request. Keywords—Child Tracking System, Global Positioning System (GPS), SMS-based Mobile Application.", "title": "" }, { "docid": "064aba7f2bd824408bd94167da5d7b3a", "text": "Online comments submitted by readers of news articles can provide valuable feedback and critique, personal views and perspectives, and opportunities for discussion. The varying quality of these comments necessitates that publishers remove the low quality ones, but there is also a growing awareness that by identifying and highlighting high quality contributions this can promote the general quality of the community. In this paper we take a user-centered design approach towards developing a system, CommentIQ, which supports comment moderators in interactively identifying high quality comments using a combination of comment analytic scores as well as visualizations and flexible UI components. We evaluated this system with professional comment moderators working at local and national news outlets and provide insights into the utility and appropriateness of features for journalistic tasks, as well as how the system may enable or transform journalistic practices around online comments.", "title": "" }, { "docid": "60c42e3d0d0e82200a80b469a61f1921", "text": "BACKGROUND\nDespite using sterile technique for catheter insertion, closed drainage systems, and structured daily care plans, catheter-associated urinary tract infections (CAUTIs) regularly occur in acute care hospitals. We believe that meaningful reduction in CAUTI rates can only be achieved by reducing urinary catheter use.\n\n\nMETHODS\nWe used an interventional study of a hospital-wide, multidisciplinary program to reduce urinary catheter use and CAUTIs on all patient care units in a 300-bed, community teaching hospital in Connecticut. 
Our primary focus was the implementation of a nurse-directed urinary catheter removal protocol. This protocol was linked to the physician's catheter insertion order. Three additional elements included physician documentation of catheter insertion criteria, a device-specific charting module added to physician electronic progress notes, and biweekly unit-specific feedback on catheter use rates and CAUTI rates in a multidisciplinary forum.\n\n\nRESULTS\nWe achieved a 50% hospital-wide reduction in catheter use and a 70% reduction in CAUTIs over a 36-month period, although there was wide variation from unit to unit in catheter reduction efforts, ranging from 4% (maternity) to 74% (telemetry).\n\n\nCONCLUSION\nUrinary catheter use, and ultimately CAUTI rates, can be effectively reduced by the diligent application of relatively few evidence-based interventions. Aggressive implementation of the nurse-directed catheter removal protocol was associated with lower catheter use rates and reduced infection rates.", "title": "" }, { "docid": "3a0d2784b1115e82a4aedad074da8c74", "text": "The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved including cases of cation exchange, dissolution, dissociation, equilibrium and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases.", "title": "" }, { "docid": "93801b742fd2b99b2416b9ab5eb069e7", "text": "Importance-Performance Analysis (IPA) constitutes an indirect approximation to user's satisfaction measurement that allows to represent, in an easy and functional way, the main points and improvement areas of a specific product or service. Beginning from the importance and judgements concerning the performance that users grant to each prominent attributes of a service, it is possible to obtain a graphic divided into four quadrants in which recommendations for the organization economic resources management are included. Nevertheless, this tool has raised controversies since its origins, referred fundamentally to the placement of the axes that define the quadrants and the conception and measurement of the importance of attributes that compose the service. The primary goal of this article is to propose an alternative to the IPA representation that allows to overcome the limitations and contradictions derived from the original technique, without rejecting the classical graph. The analysis applies to data obtained in a survey about satisfaction with primary health care services of Galicia. Results will permit to advise to primary health care managers with a view toward the planning of future strategic actions.", "title": "" }, { "docid": "46f6001ef4cd4fa02c9edef7ad316094", "text": "5G will provide broadband access everywhere, entertain higher user mobility, and enable connectivity of massive number of devices (e.g.
Internet of Things (IoT)) in an ultrareliable and affordable way. The main technological enablers such as cloud computing, Software Defined Networking (SDN) and Network Function Virtualization (NFV) are maturing towards their use in 5G. However, there are pressing security challenges in these technologies besides the growing concerns for user privacy. In this paper, we provide an overview of the security challenges in these technologies and the issues of privacy in 5G. Furthermore, we present security solutions to these challenges and future directions for secure 5G systems.", "title": "" }, { "docid": "ee2f9d185e7e6b47a79fa8ef3ba227c9", "text": "Pedestrian behavior modeling and analysis is important for crowd scene understanding and has various applications in video surveillance. Stationary crowd groups are a key factor influencing pedestrian walking patterns but was mostly ignored in the literature. It plays different roles for different pedestrians in a crowded scene and can change over time. In this paper, a novel model is proposed to model pedestrian behaviors by incorporating stationary crowd groups as a key component. Through inference on the interactions between stationary crowd groups and pedestrians, our model can be used to investigate pedestrian behaviors. The effectiveness of the proposed model is demonstrated through multiple applications, including walking path prediction, destination prediction, personality attribute classification, and abnormal event detection. To evaluate our model, two large pedestrian walking route datasets are built. The walking routes of around 15 000 pedestrians from two crowd surveillance videos are manually annotated. The datasets will be released to the public and benefit future research on pedestrian behavior analysis and crowd scene understanding.", "title": "" }, { "docid": "df4477952bc78f9ddca6a637b0d9b990", "text": "Food preference learning is an important component of wellness applications and restaurant recommender systems as it provides personalized information for effective food targeting and suggestions. However, existing systems require some form of food journaling to create a historical record of an individual's meal selections. In addition, current interfaces for food or restaurant preference elicitation rely extensively on text-based descriptions and rating methods, which can impose high cognitive load, thereby hampering wide adoption.\n In this paper, we propose PlateClick, a novel system that bootstraps food preference using a simple, visual quiz-based user interface. We leverage a pairwise comparison approach with only visual content. Using over 10,028 recipes collected from Yummly, we design a deep convolutional neural network (CNN) to learn the similarity distance metric between food images. Our model is shown to outperform state-of-the-art CNN by 4 times in terms of mean Average Precision. We explore a novel online learning framework that is suitable for learning users' preferences across a large scale dataset based on a small number of interactions (≤ 15). Our online learning approach balances exploitation-exploration and takes advantage of food similarities using preference-propagation in locally connected graphs.\n We evaluated our system in a field study of 227 anonymous users. The results demonstrate that our method outperforms other baselines by a significant margin, and the learning process can be completed in less than one minute. 
In summary, PlateClick provides a light-weight, immersive user experience for efficient food preference elicitation.", "title": "" }, { "docid": "83728a9b746c7d3c3ea1e89ef01f9020", "text": "This paper presents the design of the robot AILA, a mobile dual-arm robot system developed as a research platform for investigating aspects of the currently booming multidisciplinary area of mobile manipulation. The robot integrates and allows in a single platform to perform research in most of the areas involved in autonomous robotics: navigation, mobile and dual-arm manipulation planning, active compliance and force control strategies, object recognition, scene representation, and semantic perception. AILA has 32 degrees of freedom, including 7-DOF arms, 4-DOF torso, 2-DOF head, and a mobile base equipped with six wheels, each of them with two degrees of freedom. The primary design goal was to achieve a lightweight arm construction with a payload-to-weight ratio greater than one. Besides, an adjustable body should sustain the dual-arm system providing an extended workspace. In addition, mobility is provided by means of a wheel-based mobile base. As a result, AILA's arms can lift 8kg and weigh 5.5kg, thus achieving a payload-to-weight ratio of 1.45. The paper will provide an overview of the design, especially in the mechatronics area, as well as of its realization, the sensors incorporated in the system, and its control software.", "title": "" }, { "docid": "b6c9844bdad60c5373cac2bcd018d899", "text": "Cloud computing is currently gaining enormous momentum due to a number of promised benefits: ease of use in terms of deployment, administration, and maintenance, along with high scalability and flexibility to create new services. However, as more personal and business applications migrate to the cloud, service quality will become an important differentiator between providers. In particular, quality of experience as perceived by users has the potential to become the guiding paradigm for managing quality in the cloud. In this article, we discuss technical challenges emerging from shifting services to the cloud, as well as how this shift impacts QoE and QoE management. Thereby, a particular focus is on multimedia cloud applications. Together with a novel QoE-based classification scheme of cloud applications, these challenges drive the research agenda on QoE management for cloud applications.", "title": "" }, { "docid": "8b8ec88419baa23e29d2ec336e8805c6", "text": "Short-term passenger demand forecasting is of great importance to the ondemand ride service platform, which can incentivize vacant cars moving from over-supply regions to over-demand regions. The spatial dependences, temporal dependences, and exogenous dependences need to be considered simultaneously, however, which makes short-term passenger demand forecasting challenging. We propose a novel deep learning (DL) approach, named the fusion convolutional long short-term memory network (FCL-Net), to address these three dependences within one end-to-end learning architecture. The model is stacked and fused by multiple convolutional long short-term memory (LSTM) layers, standard LSTM layers, and convolutional layers. The fusion of convolutional techniques and the LSTM network enables the proposed DL approach to better capture the spatiotemporal characteristics and correlations of explanatory variables. A tailored spatially aggregated random forest is employed to rank the importance of the explanatory variables. 
The ranking is then used for feature selection. The proposed DL approach is applied to the short-term forecasting of passenger demand under an on-demand ride service platform in Hangzhou, China. Experimental results, validated on real-world data provided by DiDi Chuxing, show that the FCL-Net achieves better predictive performance than traditional approaches including both classical time-series prediction models and neural network based algorithms (e.g., artificial neural network and LSTM). Furthermore, the consideration of exogenous variables in addition to passenger demand itself, such as the travel time rate, time-of-day, day-of-week, and weather conditions, is proven to be promising, since it reduces the root mean squared error (RMSE) by 50.9%. It is also interesting to find that the feature selection reduces 30% in the dimension of predictors and leads to only 0.6% loss in the forecasting accuracy measured by RMSE in the proposed model. This paper is one of the first DL studies to forecast the short-term passenger demand of an on-demand ride service platform by examining the spatio-temporal correlations.", "title": "" }, { "docid": "c2a2e9903859a6a9f9b3db5696cb37ff", "text": "Depth estimation from a single image is a fundamental problem in computer vision. In this paper, we propose a simple yet effective convolutional spatial propagation network (CSPN) to learn the affinity matrix for depth prediction. Specifically, we adopt an efficient linear propagation model, where the propagation is performed with a manner of recurrent convolutional operation, and the affinity among neighboring pixels is learned through a deep convolutional neural network (CNN). We apply the designed CSPN to two depth estimation tasks given a single image: (1) Refine the depth output from existing state-of-the-art (SOTA) methods; (2) Convert sparse depth samples to a dense depth map by embedding the depth samples within the propagation procedure. The second task is inspired by the availability of LiDAR that provides sparse but accurate depth measurements. We experimented the proposed CSPN over the popular NYU v2 [1] and KITTI [2] datasets, where we show that our proposed approach improves not only quality (e.g., 30% more reduction in depth error), but also speed (e.g., 2 to 5× faster) of depth maps than previous SOTA methods.", "title": "" }, { "docid": "d8ec0c507217500a97c1664c33b2fe72", "text": "To realize ideal force control of robots that interact with a human, a very precise actuating system with zero impedance is desired. For such applications, a rotary series elastic actuator (RSEA) has been introduced recently. This paper presents the design of RSEA and the associated control algorithms. To generate joint torque as desired, a torsional spring is installed between a motor and a human joint, and the motor is controlled to produce a proper spring deflection for torque generation. When the desired torque is zero, the motor must follow the human joint motion, which requires that the friction and the inertia of the motor be compensated. The human joint and the body part impose the load on the RSEA. They interact with uncertain environments and their physical properties vary with time.
In this paper, the disturbance observer (DOB) method is applied to make the RSEA precisely generate the desired torque under such time-varying conditions. Based on the nominal model preserved by the DOB, feedback and feedforward controllers are optimally designed for the desired performance, i.e., the RSEA: (1) exhibits very low impedance and (2) generates the desired torque precisely while interacting with a human. The effectiveness of the proposed design is verified by experiments.", "title": "" } ]
scidocsrr
dabfe46f674a02cf85a3aada685a722f
Full Attitude Control of a VTOL tailsitter UAV
[ { "docid": "5a2be4e590d31b0cb553215f11776a15", "text": "This paper presents a review of the state of the art and a discussion on vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs) applied to the inspection of power utility assets and other similar civil applications. The first part of the paper presents the authors' view on specific benefits and operation constraints associated with the use of UAVs in power industry applications. The second part cites more than 70 recent publications related to this field of application. Among them, some present complete technologies while others deal with specific subsystems relevant to the application of such mobile platforms to power line inspection. The authors close with a discussion of key factors for successful application of VTOL UAVs to power industry infrastructure inspection.", "title": "" }, { "docid": "9d5ca4c756b63c60f6a9d6308df63ea3", "text": "This paper presents recent advances in the project: development of a convertible unmanned aerial vehicle (UAV). This aircraft is able to change its flight configuration from hover to level flight and vice versa by means of a transition maneuver, while maintaining the aircraft in flight. For this purpose a nonlinear control strategy based on Lyapunov design is given. Numerical results are presented showing the effectiveness of the proposed approach.", "title": "" } ]
[ { "docid": "9eb683a1fe85db884e7615222105640d", "text": "OBJECTIVE\nTo evaluate the effect of circumcision on the glans penis sensitivity by comparing the changes of the glans penis vibrotactile threshold between normal men and patients with simple redundant prepuce and among the patients before and after the operation.\n\n\nMETHODS\nThe vibrotactile thresholds were measured at the forefinger and glans penis in 73 normal volunteer controls and 96 patients with simple redundant prepuce before and after circumcision by biological vibration measurement instrument, and the changes in the perception sensitivity of the body surface were analyzed.\n\n\nRESULTS\nThe G/F (glans/finger) indexes in the control and the test group were respectively 2.39 +/- 1.72 and 1.97 +/- 0.71, with no significant difference in between (P > 0.05). And those of the test group were 1.97 +/- 0.71, 2.64 +/- 1.38, 3.09 +/-1.46 and 2.97 +/- 1.20 respectively before and 1, 2 and 3 months after circumcision, with significant difference between pre- and post-operation (P < 0.05).\n\n\nCONCLUSION\nThere is a statistic difference in the glans penis vibration perception threshold between normal men and patients with simple redundant prepuce. The glans penis perception sensitivity decreases after circumcision.", "title": "" }, { "docid": "e2b74db574db8001dace37cbecb8c4eb", "text": "Distributed key-value stores are now a standard component of high-performance web services and cloud computing applications. While key-value stores offer significant performance and scalability advantages compared to traditional databases, they achieve these properties through a restricted API that limits object retrieval---an object can only be retrieved by the (primary and only) key under which it was inserted. This paper presents HyperDex, a novel distributed key-value store that provides a unique search primitive that enables queries on secondary attributes. The key insight behind HyperDex is the concept of hyperspace hashing in which objects with multiple attributes are mapped into a multidimensional hyperspace. This mapping leads to efficient implementations not only for retrieval by primary key, but also for partially-specified secondary attribute searches and range queries. A novel chaining protocol enables the system to achieve strong consistency, maintain availability and guarantee fault tolerance. An evaluation of the full system shows that HyperDex is 12-13x faster than Cassandra and MongoDB for finding partially specified objects. Additionally, HyperDex achieves 2-4x higher throughput for get/put operations.", "title": "" }, { "docid": "6331c1d288e8689ecc8b183294676b10", "text": "histories. However, many potential combinations of life-history traits do not actually occur in nature [1,2]. Indeed, the few major axes of life-history variation stand in stark contrast to the variety of selective pressures on life histories: physical conditions, seasonality and unpredictability of the environment, food availability, predators and disease organisms, and relationships within social and family groups. Most life-history thinking has been concerned with constrained evolutionary responses to the environment. Differences among the life histories of species are viewed commonly as having a genetic basis and reflecting the optimization of phenotypes with respect to their environments. 
The optimal balance between parental investment and adult self-maintenance is also influenced by the life table of the population, particularly the relative value of present and future reproduction [1,3–5]. Constraints on adaptive responses are established by the allocation of limited time, energy and nutrients among competing functions [6,7]. Relatively less attention has been paid to nongenetic responses to the environment, such as adjustment of parental investment in response to perceived risk, except for the study of phenotypic flexibility (the reaction norm) as a life-history character itself [4,8,9]. Here we argue that physiological function, including endocrine control mechanisms, mediates the relationship of the organism to its environment and therefore is essential to our understanding of the diversification of life histories. Much of the variation in life histories, particularly variation in parental investment and self-maintenance, reflects phenotypic responses of individuals to environmental stresses and perceived risks. As a result, the organization of behavioral and physiological control mechanisms might constrain individual (and evolutionary) responses and limit life-history variation among species.", "title": "" }, { "docid": "4fe3f01fef636f8f5cb3c7655a619390", "text": "This paper replicates the results of Dai, Olah, and Le’s paper ”Document Embedding with Paragraph Vectors” and compares the performance of three unsupervised document modeling algorithms [1]. We built and compared the results of Paragraph Vector, Latent Dirichlet Allocation, and traditional Word2Vec models on Wikipedia browsing. We then built three extensions to the original Paragraph Vector model, finding that combinations of paragraph structures assist in optimizing Paragraph Vector training.", "title": "" }, { "docid": "63efc8aecf9b28b2a2bbe4514ed3a7fe", "text": "Reading is a hobby to open the knowledge windows. Besides, it can provide the inspiration and spirit to face this life. By this way, concomitant with the technology development, many companies serve the e-book or book in soft file. The system of this book of course will be much easier. No worry to forget bringing the statistics and chemometrics for analytical chemistry book. You can open the device and get the book by on-line.", "title": "" }, { "docid": "2a22cf643ef4885f51ee588b051373fe", "text": "Telecommunication companies with expensive networks may become the biggest beneficiaries of SDN; however, in contrast to traditional routers, the development of SDN controllers is driven by open-source projects with involvement of the industry. Two prevalent projects in SDN development are the OpenDaylight and the ONOS controllers. These SDN controllers are advanced in their development - having gone through a number of releases - and have been described as being useful for a large number of use-cases. In this work, we compare and evaluate these controllers, in particular their northbound interfaces, by configuring them for a representative use-case, port-mirroring.", "title": "" }, { "docid": "5df3346cb96403ee932428d159ad342e", "text": "Nearly 40% of mortality in the United States is linked to social and behavioral factors such as smoking, diet and sedentary lifestyle. Autonomous self-regulation of health-related behaviors is thus an important aspect of human behavior to assess. In 1997, the Behavior Change Consortium (BCC) was formed. 
Within the BCC, seven health behaviors, 18 theoretical models, five intervention settings and 26 mediating variables were studied across diverse populations. One of the measures included across settings and health behaviors was the Treatment Self-Regulation Questionnaire (TSRQ). The purpose of the present study was to examine the validity of the TSRQ across settings and health behaviors (tobacco, diet and exercise). The TSRQ is composed of subscales assessing different forms of motivation: amotivation, external, introjection, identification and integration. Data were obtained from four different sites and a total of 2731 participants completed the TSRQ. Invariance analyses support the validity of the TSRQ across all four sites and all three health behaviors. Overall, the internal consistency of each subscale was acceptable (most alpha values >0.73). The present study provides further evidence of the validity of the TSRQ and its usefulness as an assessment tool across various settings and for different health behaviors.", "title": "" }, { "docid": "6ff2fb4bb0c221361e973bf355d847ac", "text": "In this paper, three high-gain on-chip antennas are proposed using three different silicon-based technologies in the millimeter-wave/THz frequency range. A modified Vivaldi antenna is first implemented using the micro-fabricated floating process. In the proposed process, the top metal layer is supported by metal vias, separating it from the silicon substrate for low-loss application. The antenna gain of 5.5 dBi is obtained with 78% radiation efficiency. A monopole antenna is subsequently designed using the silicon-benzocyclobutene (Si-BCB) process, with micro-machined backed cavity for high efficiency and wideband application. The simulated gain of this antenna is 6 dBi with 88% radiation efficiency. Our third proposed on-chip antenna is fabricated using commercial 0.18-μm CMOS technology. The antenna gain and efficiency are improved by the dielectric resonator on the surface of the antenna. The measured gain is 2.7 dBi with radiation efficiency of 43%. The three proposed antennas with different silicon compatible processes are therefore suitable for application in the millimeter-wave/THz integrated circuits.", "title": "" }, { "docid": "5d2ccca443dc6b7beafa4cf213b4aa6f", "text": "Topic models based on latent Dirichlet allocation and related methods are used in a range of user-focused tasks including document navigation and trend analysis, but evaluation of the intrinsic quality of the topic model and topics remains an open research area. In this work, we explore the two tasks of automatic evaluation of single topics and automatic evaluation of whole topic models, and provide recommendations on the best strategy for performing the two tasks, in addition to providing an open-source toolkit for topic and topic model evaluation.", "title": "" }, { "docid": "c1490b8d5a7fbe69e83dff4664dc0cce", "text": "The aim of this study is to evaluate the in vitro cytotoxic activity and cellular effects of previously prepared ZnO-NPs on murine cancer cell lines using brown seaweed (Sargassum muticum) aqueous extract. Treated cancer cells with ZnO-NPs for 72 hours demonstrated various levels of cytotoxicity based on calculated IC50 values using MTT assay as follows: 21.7 ± 1.3 μg/mL (4T1), 17.45 ± 1.1 μg/mL (CRL-1451), 11.75 ± 0.8 μg/mL (CT-26), and 5.6 ± 0.55 μg/mL (WEHI-3B), respectively. On the other hand, ZnO-NPs treatments for 72 hours showed no toxicity against normal mouse fibroblast (3T3) cell line.
On the other hand, paclitaxel, which imposed an inhibitory effect on WEHI-3B cells with IC50 of 2.25 ± 0.4, 1.17 ± 0.5, and 1.6 ± 0.09 μg/mL after 24, 48, and 72 hours treatment, respectively, was used as positive control. Furthermore, distinct morphological changes were found by utilizing fluorescent dyes; apoptotic population was increased via flowcytometry, while a cell cycle block and stimulation of apoptotic proteins were also observed. Additionally, the present study showed that the caspase activations contributed to ZnO-NPs triggered apoptotic death in WEHI-3 cells. Thus, the nature of biosynthesis and the therapeutic potential of ZnO-NPs could prepare the way for further research on the design of green synthesis therapeutic agents, particularly in nanomedicine, for the treatment of cancer.", "title": "" }, { "docid": "b011b5e9ed5c96a59399603f4200b158", "text": "The word list memory test from the Consortium to establish a registry for Alzheimer's disease (CERAD) neuropsychological battery (Morris et al. 1989) was administered to 230 psychiatric outpatients. Performance of a selected, age-matched psychiatric group and normal controls was compared using an ANCOVA design with education as a covariate. Results indicated that controls performed better than psychiatric patients on most learning and recall indices. The exception to this was the savings index that has been found to be sensitive to the effects of progressive dementias. The current data are compared and integrated with published CERAD data for Alzheimer's disease patients. The CERAD list memory test is recommended as a brief, efficient, and sensitive memory measure that can be used with a range of difficult patients.", "title": "" }, { "docid": "e5bc3910aa8104004815ce92f9971e2b", "text": "The aim of this study is to analyze the relationship between emotional abilities and the influence of this relationship on self reported drivers' risky attitudes. The risky driving attitudes and emotional abilities of 177 future driving instructors were measured. The results demonstrate that risky attitudes correlate negatively with emotional abilities. Regression analysis showed that adaptability and interpersonal abilities explained the differences observed in the global risk attitude index. There were some differences in the specific risk factors. The variability observed in the speed and distraction and fatigue factors could also be explained by interpersonal and adaptability abilities. Nevertheless the tendency to take risks was explained by stress management and also interpersonal components. Emotional abilities have the weakest relation with alcohol and drugs factor, and in this case the variability observed was explained by the adaptability component. The results obtained highlight the importance take off including emotional abilities in prevention programs to reduce risky driving behaviors.", "title": "" }, { "docid": "1d8e414d09fe7809dbf6daf83f90a999", "text": "The SPARQL query language is currently being extended by the World Wide Web Consortium (W3C) with so-called entailment regimes. An entailment regime defines how queries are evaluated under more expressive semantics than SPARQL’s standard simple entailment, which is based on subgraph matching. The queries are very expressive since variables can occur within complex concepts and can also bind to concept or role names. In this paper, we describe a sound and complete algorithm for the OWL Direct Semantics entailment regime. 
We further propose several novel optimizations such as strategies for determining a good query execution order, query rewriting techniques, and show how specialized OWL reasoning tasks and the concept and role hierarchy can be used to reduce the query execution time. For determining a good execution order, we propose a cost-based model, where the costs are based on information about the instances of concepts and roles that are extracted from a model abstraction built by an OWL reasoner. We present two ordering strategies: a static and a dynamic one. For the dynamic case, we improve the performance by exploiting an individual clustering approach that allows for computing the cost functions based on one individual sample from a cluster. We provide a prototypical implementation and evaluate the efficiency of the proposed optimizations. Our experimental study shows that the static ordering usually outperforms the dynamic one when accurate statistics are available. This changes, however, when the statistics are less accurate, e.g., due to nondeterministic reasoning decisions. For queries that go beyond conjunctive instance queries we observe an improvement of up to three orders of magnitude due to the proposed optimizations.", "title": "" }, { "docid": "4e2dba807f4650b520b0337e74eae0e3", "text": "Gliomas are primary brain tumors arising from glial cells. Gliomas can be classified into different histopathologic grades according to World Health Oraganization (WHO) grading system which represents malignancy. In this paper, we present a method to predict the grades of Gliomas using Radiomics imaging features. MICCAI Brain Tumor Segmentation Challenge (BRATs 2015) training data, its segmentation ground truth and the ground truth labels were used for this work. 45 radiomics features based on histogram, shape and gray-level co-occurrence matrix (GLCM) were extracted from each FLAIR, T1, T1-Contrast, T2 image to quantify the property of Gliomas. Significant features among 180 features were selected through L1-norm regularization (LASSO). Based on LASSO coefficient and selected feature values, we computed a LASSO score and gliomas were classified into low-grade glimoa (LGG) or high-grade glimoa (HGG) through logistic regression. Classification result was validated by a 10-fold cross validation. Our method achieved accuracy of 0.8981, sensitivity of 0.8889, specificity of 0.9074, and area under the curve (AUC) = 0.8870.", "title": "" }, { "docid": "2da54684e59380c915d7d656b9c86572", "text": "Data management applications deployed on IaaS cloud environments must simultaneously strive to minimize cost and provide good performance. Balancing these two goals requires complex decision-making across a number of axes: resource provisioning, query placement, and query scheduling. While previous works have addressed each axis in isolation for specific types of performance goals, this demonstration showcases WiSeDB, a cloud workload management advisor service that uses machine learning techniques to address all dimensions of the problem for customizable performance goals. In our demonstration, attendees will see WiSeDB in action for a variety of workloads and performance goals.", "title": "" }, { "docid": "794f75b59c7bc3f69e21bbc2ac4b470d", "text": "Smart contracts are computer programs that are executed by a network of mutually distrusting agents, without the need of an external trusted authority. Smart contracts handle and transfer assets of considerable value (in the form of crypto-currency like Bitcoin). 
Hence, it is crucial that their implementation is bug-free. We identify the utility (or expected payoff) of interacting with such smart contracts as the basic and canonical quantitative property for such contracts. We present a framework for such quantitative analysis of smart contracts. Such a formal framework poses new and novel research challenges in programming languages, as it requires modeling of game-theoretic aspects to analyze incentives for deviation from honest behavior and modeling utilities which are not specified as standard temporal properties such as safety and termination. While game-theoretic incentives have been analyzed in the security community, their analysis has been restricted to the very special case of stateless games. However, to analyze smart contracts, stateful analysis is required as it must account for the different program states of the protocol. Our main contributions are as follows: we present (i) a simplified programming language for smart contracts; (ii) an automatic translation of the programs to state-based games; (iii) an abstraction-refinement approach to solve such games; and (iv) experimental results on real-world-inspired smart contracts.", "title": "" }, { "docid": "f8d50c7fe96fdf8fbe06332ab7e1a2a6", "text": "There is a strong need for advanced control methods in battery management systems, especially in the plug-in hybrid and electric vehicles sector, due to cost and safety issues of new high-power battery packs and high-energy cell design. Limitations in computational speed and available memory require the use of very simple battery models and basic control algorithms, which in turn result in suboptimal utilization of the battery. This work investigates the possible use of optimal control strategies for charging. We focus on the minimum time charging problem, where different constraints on internal battery states are considered. Based on features of the open-loop optimal charging solution, we propose a simple one-step predictive controller, which is shown to recover the time-optimal solution, while being feasible for real-time computations. We present simulation results suggesting a decrease in charging time by 50% compared to the conventional constant-current / constant-voltage method for lithium-ion batteries.", "title": "" }, { "docid": "000652922defcc1d500a604d43c8f77b", "text": "The problem of object recognition has not yet been solved in its general form. The most successful approach to it so far relies on object models obtained by training a statistical method on visual features obtained from camera images. The images must necessarily come from huge visual datasets, in order to circumvent all problems related to changing illumination, point of view, etc. We hereby propose to also consider, in an object model, a simple model of how a human being would grasp that object (its affordance). This knowledge is represented as a function mapping visual features of an object to the kinematic features of a hand while grasping it. The function is practically enforced via regression on a human grasping database.
After describing the database (which is publicly available) and the proposed method, we experimentally evaluate it, showing that a standard object classifier working on both sets of features (visual and motor) has a significantly better recognition rate than that of a visual-only classifier.", "title": "" }, { "docid": "09be2c69afdd2f1cfd6f1d8c1583a0ac", "text": "We present a real-time visual-based road following method for mobile robots in outdoor environments. The approach combines an image processing method, that allows to retrieve illumination invariant images, with an efficient path following algorithm. The method allows a mobile robot to autonomously navigate along pathways of different types in adverse lighting conditions using monocular vision. To validate the proposed method, we have evaluated its ability to correctly determine boundaries of pathways in a challenging outdoor dataset. Moreover, the method's performance was tested on a mobile robotic platform that autonomously navigated long paths in urban parks. The experiments demonstrated that the mobile robot was able to identify outdoor pathways of different types and navigate through them despite the presence of shadows that significantly influenced the paths' appearance.", "title": "" } ]
scidocsrr
636286e76930457bfb06bc85bf2861c9
Application of Ontologies in Cloud Computing: The State-Of-The-Art
[ { "docid": "cd108d7b5487cbcf5226b531906364a7", "text": "There has been a great deal of hype about Amazon's simple storage service (S3). S3 provides infinite scalability and high availability at low cost. Currently, S3 is used mostly to store multi-media documents (videos, photos, audio) which are shared by a community of people and rarely updated. The purpose of this paper is to demonstrate the opportunities and limitations of using S3 as a storage system for general-purpose database applications which involve small objects and frequent updates. Read, write, and commit protocols are presented. Furthermore, the cost ($), performance, and consistency properties of such a storage system are studied.", "title": "" } ]
[ { "docid": "a280c56578d96797b1b7dc2e934b0c3e", "text": "The Perspective-n-Point (PnP) problem seeks to estimate the pose of a calibrated camera from n 3D-to-2D point correspondences. There are situations, though, where PnP solutions are prone to fail because feature point correspondences cannot be reliably estimated (e.g. scenes with repetitive patterns or with low texture). In such scenarios, one can still exploit alternative geometric entities, such as lines, yielding the so-called Perspective-n-Line (PnL) algorithms. Unfortunately, existing PnL solutions are not as accurate and efficient as their point-based counterparts. In this paper we propose a novel approach to introduce 3D-to-2D line correspondences into a PnP formulation, allowing to simultaneously process points and lines. For this purpose we introduce an algebraic line error that can be formulated as linear constraints on the line endpoints, even when these are not directly observable. These constraints can then be naturally integrated within the linear formulations of two state-of-the-art point-based algorithms, the OPnP [45] and the EPnP [24], allowing them to indistinctly handle points, lines, or a combination of them. Exhaustive experiments show that the proposed formulation brings remarkable boost in performance compared to only point or only line based solutions, with a negligible computational overhead compared to the original OPnP and EPnP.", "title": "" }, { "docid": "8fa6defe08908c6ee6527d2e3a322a12", "text": "A new wide-band high-efficiency coplanar waveguide-fed printed loop antenna is presented for wireless communication systems in this paper. By adjusting geometrical parameters, the proposed antenna can easily achieve a wide bandwidth. To optimize the antenna performances, a parametric study was conducted with the aid of a commercial software, and based on the optimized geometry, a prototype was designed, fabricated, and tested. The simulated and measured results confirmed that the proposed antenna can operate at (1.68-2.68 GHz) band and at (1.46-2.6 GHz) band with bandwidth of 1 and 1.14 GHz, respectively. Moreover, the antenna has a nearly omnidirectional radiation pattern with a reasonable gain and high efficiency. Due to the above characteristics, the proposed antenna is very suitable for applications in PCS and IMT2000 systems.", "title": "" }, { "docid": "7ecba9c479a754ad55664bf8208643e0", "text": "One of the important problems that our society facing is people with disabilities which are finding hard to cope up with the fast growing technology. About nine billion people in the world are deaf and dumb. Communications between deaf-dumb and a normal person have always been a challenging task. Generally deaf and dumb people use sign language for communication, Sign language is an expressive and natural way for communication between normal and dumb people. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity, the artificial mouth is introduced for the dumb people. So, we need a translator to understand what they speak and communicate with us. Hence makes the communication between normal person and disabled people easier. This work aims to lower the barrier of disabled persons in communication. The main aim of this proposed work is to develop a cost effective system which can give voice to voiceless people with the help of Sign language. 
In the proposed work, the captured images are processed through MATLAB in PC and converted into speech through speaker and text in LCD by interfacing with Arduino. Keyword : Disabled people, Sign language, Image Processing, Arduino, LCD display, Speaker.", "title": "" }, { "docid": "c9b41427437424ebeca7031d814ec11a", "text": "Many archaeological patterns are fractal. Fractal analysis, therefore, has much to contribute to archaeology. This article offers an introduction to fractal analysis for archaeologists. We explain what fractals are, describe the essential methods of fractal analysis, and present archaeological examples. Some examples have been published previously, while others are presented here for the first time. We also explain the connection between fractal geometry and nonlinear dynamical systems. Fractals are the geometry of complex nonlinear systems. Therefore, fractal analysis is an indispensable method in our efforts to understand nonlinearities in past cultural dynamics.", "title": "" }, { "docid": "63a75f3eedb1410527eb0645ed9bf79d", "text": "Stiffness following surgery or injury to a joint develops as a progression of four stages: bleeding, edema, granulation tissue, and fibrosis. Continuous passive motion (CPM) properly applied during the first two stages of stiffness acts to pump blood and edema fluid away from the joint and periarticular tissues. This allows maintenance of normal periarticular soft tissue compliance. CPM is thus effective in preventing the development of stiffness if full motion is applied immediately following surgery and continued until swelling that limits the full motion of the joint no longer develops. This concept has been applied successfully to elbow rehabilitation, and explains the controversy surrounding CPM following knee arthroplasty. The application of this concept to clinical practice requires a paradigm shift, resulting in our attention being focused on preventing the initial or delayed accumulation of periarticular interstitial fluids.", "title": "" }, { "docid": "741a897b87cc76d68f5400974eee6b32", "text": "Numerous techniques exist to augment the security functionality of Commercial Off-The-Shelf (COTS) applications and operating systems, making them more suitable for use in mission-critical systems. Although individually useful, as a group these techniques present difficulties to system developers because they are not based on a common framework which might simplify integration and promote portability and reuse. This paper presents techniques for developing Generic Software Wrappers – protected, non-bypassable kernel-resident software extensions for augmenting security without modification of COTS source. We describe the key elements of our work: our high-level Wrapper Definition Language (WDL), and our framework for configuring, activating, and managing wrappers. We also discuss code reuse, automatic management of extensions, a framework for system-building through composition, platform-independence, and our experiences with our Solaris and FreeBSD prototypes.", "title": "" }, { "docid": "c5fe7b74e3949650a1cc3925723b5434", "text": "In recent years the telecommunications backbone has experienced substantial growth; however, little has changed in the access network. The tremendous growth of Internet traffic has accentuated the aggravating lag of access network capacity. The “last mile” still remains the bottleneck between high-capacity Local Area Networks (LANs) and the backbone network.
The most widely deployed “broadband” solutions today are Digital Subscriber Line (DSL) and cable modem (CM) networks. Although they are an improvement compared to 56 Kbps dial-up lines, they are unable to provide enough bandwidth for emerging services such as Video-On-Demand (VoD), interactive gaming or two-way video conferencing. A new technology is required; one that is inexpensive, simple, scalable, and capable of delivering bundled voice, data and video services to an end-user over a single network. Ethernet Passive Optical Networks (EPONs), which represent the convergence of low-cost Ethernet equipment and low-cost fiber infrastructure, appear to be the best candidate for the next-generation access network.", "title": "" }, { "docid": "691cdea5cf3fae2713c721c1cfa8c132", "text": "of the Dissertation Addressing the Challenges of Underspecification in Web Search", "title": "" }, { "docid": "8c95392ab3cc23a7aa4f621f474d27ba", "text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.", "title": "" }, { "docid": "9096a4dac61f8a87da4f5cbfca5899a8", "text": "OBJECTIVE\nTo evaluate the CT findings of ruptured corpus luteal cysts.\n\n\nMATERIALS AND METHODS\nSix patients with a surgically proven ruptured corpus luteal cyst were included in this series. The prospective CT findings were retrospectively analyzed in terms of the size and shape of the cyst, the thickness and enhancement pattern of its wall, the attenuation of its contents, and peritoneal fluid.\n\n\nRESULTS\nThe mean diameter of the cysts was 2.8 (range, 1.5-4.8) cm; three were round and three were oval. The mean thickness of the cyst wall was 4.7 (range, 1-10) mm; in all six cases it showed strong enhancement, and in three was discontinuous. In five of six cases, the cystic contents showed high attenuation. Peritoneal fluid was present in all cases, and its attenuation was higher, especially around the uterus and adnexa, than that of urine present in the bladder.\n\n\nCONCLUSION\nIn a woman in whom CT reveals the presence of an ovarian cyst with an enhancing rim and highly attenuated contents, as well as highly attenuated peritoneal fluid, a ruptured corpus luteal cyst should be suspected. Other possible evidence of this is focal interruption of the cyst wall and the presence of peritoneal fluid around the adnexa.", "title": "" }, { "docid": "1692fd25fba1145eb07b004ed07cab4e", "text": "Concepts of cognitive radio are yet in an early stage of development. 
They aim at improving the efficiency of spectrum utilization by exploiting locally and temporally vacant parts of the spectrum. When hierarchical spectrum access is considered, secondary users are authorized to use spectrum white spaces on a non-interfering basis, where minimal impact on the primary systems has to be ensured. GFDM is a digital multi-carrier transceiver concept that employs pulse shaping filters to provide control over the transmitted signal's spectral properties, a cyclic prefix that enables an efficient FFT-based frequency domain equalization scheme as well as tail biting as a way to make the prefix independent of the filter length. In this paper, two setups of uncoded AWGN transmission are analyzed through simulation. Both setups have in common that an OFDM primary system is overlaid by a secondary system. For that purpose, resources are made free artificially. First, a non-synchronized OFDM system is inserted into the white space. Then, the results are compared to the case when the secondary system operates with the GFDM scheme. Both setups are reviewed under the aspect of bit error performance in dependence of guard bands and various pulse shaping filter parameters. Conclusions for the primary and secondary system are drawn.", "title": "" }, { "docid": "ab0541d9ec1ea0cf7ad85d685267c142", "text": "Umbilical catheters have been used in NICUs for drawing blood samples, measuring blood pressure, and administering fluid and medications for more than 25 years. Complications associated with umbilical catheters include thrombosis; embolism; vasospasm; vessel perforation; hemorrhage; infection; gastrointestinal, renal, and limb tissue damage; hepatic necrosis; hydrothorax; cardiac arrhythmias; pericardial effusion and tamponade; and erosion of the atrium and ventricle. A review of the literature provides conflicting accounts of the superiority of high versus low placement of umbilical arterial catheters. This article reviews the current literature regarding use of umbilical catheters in neonates. It also highlights the policy developed for the authors' NICU, a 34-bed tertiary care unit of a children's hospital, and analyzes complications associated with umbilical catheter use for 1 year in that unit.", "title": "" }, { "docid": "904e63188b0a9772f1f81bbf42be65a1", "text": "Malicious URLs have become a channel for Internet criminal activities such as drive-by-download, spamming and phishing. Applications for the detection of malicious URLs are accurate but slow (because they need to download the content or query some Internet host information). In this paper we present a novel lightweight filter based only on the URL string itself to use before existing processing methods. We run experiments on a large dataset and demonstrate a 75% reduction in workload size while retaining at least 90% of malicious URLs. Existing methods do not scale well with the hundreds of millions of URLs encountered every day as the problem is a heavily-imbalanced, large-scale binary classification problem. Our proposed method is able to handle nearly two million URLs in less than five minutes. We generate two filtering models by using lexical features and descriptive features, and then combine the filtering results. The on-line learning algorithms are applied here not only for dealing with large-scale data sets but also for fitting the very short lifetime characteristics of malicious URLs. 
Our filter can significantly reduce the volume of URL queries on which further analysis needs to be performed, saving both computing time and bandwidth used for content retrieval.", "title": "" }, { "docid": "31b161f4288fb2e60f2d72c384906d94", "text": "This article presents a study that aims at constructing a teaching framework for software development methods in higher education. The research field is a capstone project-based course, offered by the Technion’s Department of Computer Science, in which Extreme Programming is introduced. The research paradigm is an Action Research that involves cycles of data collection, examination, evaluation, and application of results. The research uses several research tools for data gathering, as well as several research methods for data interpretation. The article describes in detail the research background, the research method, and the gradual emergence process of a framework for teaching software development methods. As part of the comprehensive teaching framework, a set of measures is developed to assess, monitor, and improve the teaching and the actual process of software development projects.", "title": "" }, { "docid": "23bbd88d88de6b158cd89b1655216b86", "text": "This paper presents a novel algorithmic method for automatically generating personal handwriting styles of Chinese characters through an example-based approach. The method first splits a whole Chinese character into multiple constituent parts, such as strokes, radicals, and frequent character components. The algorithm then analyzes and learns the characteristics of character handwriting styles both defined in the Chinese national font standard and those exhibited in a person's own handwriting records. In such an analysis process, we adopt a parametric representation of character shapes and also examine the spatial relationships between multiple constituent components of a character. By imitating shapes of individual character components as well as the spatial relationships between them, the proposed method can automatically generate personalized handwritings following an example-based approach. To explore the quality of our automatic generation algorithm, we compare the computer generated results with the authentic human handwriting samples, which appear satisfying for entertainment or mobile applications as agreed by Chinese subjects in our user study.", "title": "" }, { "docid": "d4d52c325a33710cfa59a2067dbc553c", "text": "This paper presents an SDR (Software-Defined Radio) implementation of an FMCW (Frequency-Modulated Continuous-Wave) radar using a USRP (Universal Software Radio Peripheral) device. The tools used in the project and the architecture of implementation with FPGA real-time processing and PC off-line processing are covered. This article shows the detailed implementation of an FMCW radar using a USRP device with no external analog devices except for one amplifier and two antennas. The FMCW radar demonstrator presented in the paper has been tested in the laboratory as well as in the real environment, where the ability to detect targets such as cars moving on the roads has been successfully shown.", "title": "" }, { "docid": "96c30be2e528098e86b84b422d5a786a", "text": "The LSTM is a popular neural network model for modeling or analyzing the time-varying data. The main operation of LSTM is a matrix-vector multiplication and it becomes sparse (spMxV) due to the widely-accepted weight pruning in deep learning. 
This paper presents a new sparse matrix format, named CBSR, to maximize the inference speed of the LSTM accelerator. In the CBSR format, speed-up is achieved by balancing out the computation loads over PEs. Along with the new format, we present a simple network transformation to completely remove the hardware overhead incurred when using the CBSR format. Also, the detailed analysis on the impact of network size or the number of PEs is performed, which lacks in the prior work. The simulation results show 16∼38% improvement in the system performance compared to the well-known CSC/CSR format. The power analysis is also performed in 65nm CMOS technology to show 9∼22% energy savings.", "title": "" }, { "docid": "809d795cb5e5147979f8dffed44e6a44", "text": "The goal of this paper is to study the characteristics of various control architectures (e.g. centralized, hierarchical, distributed, and hybrid) for a team of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) in performing collaborative surveillance and crowd control. To this end, an overview of different control architectures is first provided covering their functionalities and interactions. Then, three major functional modules needed for crowd control are discussed under those architectures, including 1) crowd detection using computer vision algorithms, 2) crowd tracking using an enhanced information aggregation strategy, and 3) vehicles motion planning using a graph search algorithm. Depending on the architectures, these modules can be placed in the ground control center or embedded in each vehicle. To test and demonstrate characteristics of various control architectures, a testbed has been developed involving these modules and various hardware and software components, such as 1) assembled UAVs and UGV, 2) a real-time simulator (in Repast Simphony), 3) off-the-shelf ARM architecture computers (ODROID-U2/3), 4) autopilot units with GPS sensors, and 5) multipoint wireless networks using XBee. Experiments successfully demonstrate the pros and cons of the considered control architectures in terms of computational performance in responding to different system conditions (e.g. information sharing).", "title": "" }, { "docid": "c66b9dbc0321fe323a519aff49da6bb5", "text": "Stratum, the de-facto mining communication protocol used by blockchain based cryptocurrency systems, enables miners to reliably and efficiently fetch jobs from mining pool servers. In this paper we exploit Stratum’s lack of encryption to develop passive and active attacks on Bitcoin’s mining protocol, with important implications on the privacy, security and even safety of mining equipment owners. We introduce StraTap and ISP Log attacks, that infer miner earnings if given access to miner communications, or even their logs. We develop BiteCoin, an active attack that hijacks shares submitted by miners, and their associated payouts. We build BiteCoin on WireGhost, a tool we developed to hijack and surreptitiously maintain Stratum connections. Our attacks reveal that securing Stratum through pervasive encryption is not only undesirable (due to large overheads), but also ineffective: an adversary can predict miner earnings even when given access to only packet timestamps. Instead, we devise Bedrock, a minimalistic Stratum extension that protects the privacy and security of mining participants. 
We introduce and leverage the mining cookie concept, a secret that each miner shares with the pool and includes in its puzzle computations, and that prevents attackers from reconstructing or hijacking the puzzles. We have implemented our attacks and collected 138MB of Stratum protocol traffic from mining equipment in the US and Venezuela. We show that Bedrock is resilient to active attacks even when an adversary breaks the crypto constructs it uses. Bedrock imposes a daily overhead of 12.03s on a single pool server that handles mining traffic from 16,000 miners.", "title": "" }, { "docid": "74da0fe221dd6a578544e6b4896ef60e", "text": "This paper outlines a new approach to the study of power, that of the sociology of translation. Starting from three principles, those of agnosticism (impartiality between actors engaged in controversy), generalised symmetry (the commitment to explain conflicting viewpoints in the same terms) and free association (the abandonment of all a priori distinctions between the natural and the social), the paper describes a scientific and economic controversy about the causes for the decline in the population of scallops in St. Brieuc Bay and the attempts by three marine biologists to develop a conservation strategy for that population. Four ‘moments’ of translation are discerned in the attempts by these researchers to impose themselves and their definition of the situation on others: (a) problematisation: the researchers sought to become indispensable to other actors in the drama by defining the nature and the problems of the latter and then suggesting that these would be resolved if the actors negotiated the ‘obligatory passage point’ of the researchers’ programme of investigation; (b) interessement: a series of processes by which the researchers sought to lock the other actors into the roles that had been proposed for them in that programme; (c) enrolment: a set of strategies in which the researchers sought to define and interrelate the various roles they had allocated to others; (d) mobilisation: a set of methods used by the researchers to ensure that supposed spokesmen for various relevant collectivities were properly able to represent those collectivities and not betrayed by the latter. In conclusion it is noted that translation is a process, never a completed accomplishment, and it may (as in the empirical case considered) fail.", "title": "" } ]
scidocsrr
2255e1fb003f3cc7b3e6c8030276c8f9
Non-contact video-based pulse rate measurement on a mobile service robot
[ { "docid": "2531d8d05d262c544a25dbffb7b43d67", "text": "Plethysmographic signals were measured remotely (> 1m) using ambient light and a simple consumer level digital camera in movie mode. Heart and respiration rates could be quantified up to several harmonics. Although the green channel featuring the strongest plethysmographic signal, corresponding to an absorption peak by (oxy-) hemoglobin, the red and blue channels also contained plethysmographic information. The results show that ambient light photo-plethysmography may be useful for medical purposes such as characterization of vascular skin lesions (e.g., port wine stains) and remote sensing of vital signs (e.g., heart and respiration rates) for triage or sports purposes.", "title": "" } ]
[ { "docid": "44672e9dc60639488800ad4ae952f272", "text": "The GPS technology and new forms of urban geography have changed the paradigm for mobile services. As such, the abundant availability of GPS traces has enabled new ways of doing taxi business. Indeed, recent efforts have been made on developing mobile recommender systems for taxi drivers using Taxi GPS traces. These systems can recommend a sequence of pick-up points for the purpose of maximizing the probability of identifying a customer with the shortest driving distance. However, in the real world, the income of taxi drivers is strongly correlated with the effective driving hours. In other words, it is more critical for taxi drivers to know the actual driving routes to minimize the driving time before finding a customer. To this end, in this paper, we propose to develop a cost-effective recommender system for taxi drivers. The design goal is to maximize their profits when following the recommended routes for finding passengers. Specifically, we first design a net profit objective function for evaluating the potential profits of the driving routes. Then, we develop a graph representation of road networks by mining the historical taxi GPS traces and provide a Brute-Force strategy to generate optimal driving route for recommendation. However, a critical challenge along this line is the high computational cost of the graph based approach. Therefore, we develop a novel recursion strategy based on the special form of the net profit function for searching optimal candidate routes efficiently. Particularly, instead of recommending a sequence of pick-up points and letting the driver decide how to get to those points, our recommender system is capable of providing an entire driving route, and the drivers are able to find a customer for the largest potential profit by following the recommendations. This makes our recommender system more practical and profitable than other existing recommender systems. Finally, we carry out extensive experiments on a real-world data set collected from the San Francisco Bay area and the experimental results clearly validate the effectiveness of the proposed recommender system.", "title": "" }, { "docid": "6224f4f3541e9cd340498e92a380ad3f", "text": "A personal story: From philosophy to software.", "title": "" }, { "docid": "da0de29348f5414f33bacad850fa79d1", "text": "This paper presents a construction algorithm for the short block irregular low-density parity-check (LDPC) codes. By applying a magic square theorem as a part of the matrix construction, a newly developed algorithm, the so-called Magic Square Based Algorithm (MSBA), is obtained. The modified array codes are focused on in this study since the reduction of 1s can lead to simple encoding and decoding schemes. Simulation results based on AWGN channels show that with the code rate of 0.8 and SNR 5 dB, the BER of 10 can be obtained whilst the number of decoding iteration is relatively low.", "title": "" }, { "docid": "d8272965f75b55bafb29c0eb4892f813", "text": "One expensive step when defining crowdsourcing tasks is to define the examples and control questions for instructing the crowd workers. In this paper, we introduce a self-training strategy for crowdsourcing. The main idea is to use an automatic classifier, trained on weakly supervised data, to select examples associated with high confidence. These are used by our automatic agent to explain the task to crowd workers with a question answering approach. 
We compared our relation extraction system trained with data annotated (i) with distant supervision and (ii) by workers instructed with our approach. The analysis shows that our method relatively improves the relation extraction system by about 11% in F1.", "title": "" }, { "docid": "2841406ba32b534bb85fb970f2a00e58", "text": "We present WHATSUP, a collaborative filtering system for disseminating news items in a large-scale dynamic setting with no central authority. WHATSUP constructs an implicit social network based on user profiles that express the opinions of users about the news items they receive (like-dislike). Users with similar tastes are clustered using a similarity metric reflecting long-standing and emerging (dis)interests. News items are disseminated through a novel heterogeneous gossip protocol that (1) biases the orientation of its targets towards those with similar interests, and (2) amplifies dissemination based on the level of interest in every news item. We report on an extensive evaluation of WHATSUP through (a) simulations, (b) a ModelNet emulation on a cluster, and (c) a PlanetLab deployment based on real datasets. We show that WHATSUP outperforms various alternatives in terms of accurate and complete delivery of relevant news items while preserving the fundamental advantages of standard gossip: namely, simplicity of deployment and robustness.", "title": "" }, { "docid": "ecc31d1d7616e014a3a032d14e149e9b", "text": "It has been proposed that sexual stimuli will be processed in a comparable manner to other evolutionarily meaningful stimuli (such as spiders or snakes) and therefore elicit an attentional bias and more attentional engagement (Spiering and Everaerd, In E. Janssen (Ed.), The psychophysiology of sex (pp. 166-183). Bloomington: Indiana University Press, 2007). To investigate early and late attentional processes while looking at sexual stimuli, heterosexual men (n = 12) viewed pairs of sexually preferred (images of women) and sexually non-preferred images (images of girls, boys or men), while eye movements were measured. Early attentional processing (initial orienting) was assessed by the number of first fixations and late attentional processing (maintenance of attention) was assessed by relative fixation time. Results showed that relative fixation time was significantly longer for sexually preferred stimuli than for sexually non-preferred stimuli. Furthermore, the first fixation was more often directed towards the preferred sexual stimulus, when simultaneously presented with a non-sexually preferred stimulus. Thus, the current study showed for the first time an attentional bias to sexually relevant stimuli when presented simultaneously with sexually irrelevant pictures. This finding, along with the discovery that heterosexual men maintained their attention to sexually relevant stimuli, highlights the importance of investigating early and late attentional processes while viewing sexual stimuli. Furthermore, the current study showed that sexually relevant stimuli are favored by the human attentional system.", "title": "" }, { "docid": "63dc375e505ceb5488a06306775969ba", "text": "N-Methyl-d-aspartate (NMDA) receptors belong to the family of ionotropic glutamate receptors, which mediate most excitatory synaptic transmission in mammalian brains. Calcium permeation triggered by activation of NMDA receptors is the pivotal event for initiation of neuronal plasticity. 
Here, we show the crystal structure of the intact heterotetrameric GluN1-GluN2B NMDA receptor ion channel at 4 angstroms. The NMDA receptors are arranged as a dimer of GluN1-GluN2B heterodimers with the twofold symmetry axis running through the entire molecule composed of an amino terminal domain (ATD), a ligand-binding domain (LBD), and a transmembrane domain (TMD). The ATD and LBD are much more highly packed in the NMDA receptors than non-NMDA receptors, which may explain why ATD regulates ion channel activity in NMDA receptors but not in non-NMDA receptors.", "title": "" }, { "docid": "6d227bbf8df90274f44a26d9c269c663", "text": "Text categorization is a fundamental task in document processing, allowing the automated handling of enormous streams of documents in electronic form. One difficulty in handling some classes of documents is the presence of different kinds of textual errors, such as spelling and grammatical errors in email, and character recognition errors in documents that come through OCR. Text categorization must work reliably on all input, and thus must tolerate some level of these kinds of problems. We describe here an N-gram-based approach to text categorization that is tolerant of textual errors. The system is small, fast and robust. This system worked very well for language classification, achieving in one test a 99.8% correct classification rate on Usenet newsgroup articles written in different languages. The system also worked reasonably well for classifying articles from a number of different computer-oriented newsgroups according to subject, achieving as high as an 80% correct classification rate. There are also several obvious directions for improving the system’s classification performance in those cases where it did not do as well. The system is based on calculating and comparing profiles of N-gram frequencies. First, we use the system to compute profiles on training set data that represent the various categories, e.g., language samples or newsgroup content samples. Then the system computes a profile for a particular document that is to be classified. Finally, the system computes a distance measure between the document’s profile and each of the category profiles. The system selects the category whose profile has the smallest distance to the document’s profile. The profiles involved are quite small, typically 10K bytes for a category training set, and less than 4K bytes for an individual document. Using N-gram frequency profiles provides a simple and reliable way to categorize documents in a wide range of classification tasks.", "title": "" }, { "docid": "86c0547368eb9003beed2ba7eefc75a4", "text": "Electronic social media offers new opportunities for informal communication in written language, while at the same time, providing new datasets that allow researchers to document dialect variation from records of natural communication among millions of individuals. The unprecedented scale of this data enables the application of quantitative methods to automatically discover the lexical variables that distinguish the language of geographical areas such as cities. This can be paired with the segmentation of geographical space into dialect regions, within the context of a single joint statistical model — thus simultaneously identifying coherent dialect regions and the words that distinguish them. 
Finally, a diachronic analysis reveals rapid changes in the geographical distribution of these lexical features, suggesting that statistical analysis of social media may offer new insights on the diffusion of lexical change.", "title": "" }, { "docid": "149ffd270f39a330f4896c7d3aa290be", "text": "The pathogenesis underlying many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing over the past decade. The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.", "title": "" }, { "docid": "1b030e734e3ddfb5e612b1adc651b812", "text": "Clustering is an essential task in many areas such as machine learning, data mining and computer vision among others. Cluster validation aims to assess the quality of partitions obtained by clustering algorithms. Several indexes have been developed for cluster validation purposes. They can be external or internal depending on the availability of ground truth clustering. This paper deals with the issue of cluster validation of large data sets. Indeed, in the era of big data this task becomes even more difficult to handle and requires parallel and distributed approaches. In this work, we are interested in external validation indexes. More specifically, this paper proposes a model for purity-based cluster validation in a parallel and distributed manner using the Map-Reduce paradigm in order to be able to scale with increasing dataset sizes.\n The experimental results show that our proposed model is valid and achieves proper cluster validation of large datasets.", "title": "" }, { "docid": "8d8e5c06269e366044f0e3d5c3be19d0", "text": "A social network (SN) is a network containing nodes – social entities (people or groups of people) and links between these nodes. Social networks are examples of the more general concept of complex networks and SNs are usually scale-free and have a power-law distribution of node degree. Overall, several types of social networks can be enumerated: (i) simple SNs, (ii) multi-layered SNs (with many links between a pair of nodes), (iii) bipartite or multi-modal, heterogeneous SNs (with two or many different types of nodes), (iv) multidimensional SNs (reflecting the data warehousing multidimensional modelling concept), and some more specific like (v) temporal SNs, (vi) large scale SNs, and (vii) virtual SNs. For all these social networks suitable analytical methods may be applied, commonly called social network analysis (SNA). 
They cover in particular: appropriate structural measures, efficient algorithms for their calculation, statistics and data mining methods, e.g. extraction of social communities (clustering). Some types of social networks have their own measures and methods developed. Several real application domains of SNA may be distinguished: classification of nodes for the purpose of marketing, evaluation of organizational structure versus communication structures in companies, recommender systems for hidden knowledge acquisition and for user support in web 2.0, analysis of social groups on web forums and prediction of their evolution. The above SNA methods and applications will be discussed in some details. J. Pokorný, V. Snášel, K. Richta (Eds.): Dateso 2012, pp. 151–151, ISBN 978-80-7378-171-2.", "title": "" }, { "docid": "2f23d51ffd54a6502eea07883709d016", "text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.", "title": "" }, { "docid": "9998497c000fa194bf414604ff0d69b2", "text": "By embedding shorting vias, a dual-feed and dual-band L-probe patch antenna, with flexible frequency ratio and relatively small lateral size, is proposed. Dual resonant frequency bands are produced by two radiating patches located in different layers, with the lower patch supported by shorting vias. The measured impedance bandwidths, determined by 10 dB return loss, of the two operating bands reach 26.6% and 42.2%, respectively. Also the radiation patterns are stable over both operating bands. Simulation results are compared well with experiments. This antenna is highly suitable to be used as a base station antenna for multiband operation.", "title": "" }, { "docid": "c340cbb5f6b062caeed570dc2329e482", "text": "We present a mixed-mode analog/digital VLSI device comprising an array of leaky integrate-and-fire (I&F) neurons, adaptive synapses with spike-timing dependent plasticity, and an asynchronous event based communication infrastructure that allows the user to (re)configure networks of spiking neurons with arbitrary topologies. The asynchronous communication protocol used by the silicon neurons to transmit spikes (events) off-chip and the silicon synapses to receive spikes from the outside is based on the \"address-event representation\" (AER). 
We describe the analog circuits designed to implement the silicon neurons and synapses and present experimental data showing the neuron's response properties and the synapses characteristics, in response to AER input spike trains. Our results indicate that these circuits can be used in massively parallel VLSI networks of I&F neurons to simulate real-time complex spike-based learning algorithms.", "title": "" }, { "docid": "0fca0826e166ddbd4c26fe16086ff7ec", "text": "Enteric redmouth disease (ERM) is a serious septicemic bacterial disease of salmonid fish species. It is caused by Yersinia ruckeri, a Gram-negative rod-shaped enterobacterium. It has a wide host range, broad geographical distribution, and causes significant economic losses in the fish aquaculture industry. The disease gets its name from the subcutaneous hemorrhages, it can cause at the corners of the mouth and in gums and tongue. Other clinical signs include exophthalmia, darkening of the skin, splenomegaly and inflammation of the lower intestine with accumulation of thick yellow fluid. The bacterium enters the fish via the secondary gill lamellae and from there it spreads to the blood and internal organs. Y. ruckeri can be detected by conventional biochemical, serological and molecular methods. Its genome is 3.7 Mb with 3406-3530 coding sequences. Several important virulence factors of Y. ruckeri have been discovered, including haemolyin YhlA and metalloprotease Yrp1. Both non-specific and specific immune responses of fish during the course of Y. ruckeri infection have been well characterized. Several methods of vaccination have been developed for controlling both biotype 1 and biotype 2 Y. ruckeri strains in fish. This review summarizes the current state of knowledge regarding enteric redmouth disease and Y. ruckeri: diagnosis, genome, virulence factors, interaction with the host immune responses, and the development of vaccines against this pathogen.", "title": "" }, { "docid": "a5296748b0a93696e7b15f7db9d68384", "text": "Microscopic analysis of breast tissues is necessary for a definitive diagnosis of breast cancer which is the most common cancer among women. Pathology examination requires time consuming scanning through tissue images under different magnification levels to find clinical assessment clues to produce correct diagnoses. Advances in digital imaging techniques offers assessment of pathology images using computer vision and machine learning methods which could automate some of the tasks in the diagnostic pathology workflow. Such automation could be beneficial to obtain fast and precise quantification, reduce observer variability, and increase objectivity. In this work, we propose to classify breast cancer histopathology images independent of their magnifications using convolutional neural networks (CNNs). We propose two different architectures; single task CNN is used to predict malignancy and multi-task CNN is used to predict both malignancy and image magnification level simultaneously. Evaluations and comparisons with previous results are carried out on BreaKHis dataset. Experimental results show that our magnification independent CNN approach improved the performance of magnification specific model. Our results in this limited set of training data are comparable with previous state-of-the-art results obtained by hand-crafted features. 
However, unlike previous methods, our approach has potential to directly benefit from additional training data, and such additional data could be captured with same or different magnification levels than previous data.", "title": "" }, { "docid": "1a45d5e0ccc4816c0c64c7e25e7be4e3", "text": "The interpolation of correspondences (EpicFlow) was widely used for optical flow estimation in most-recent works. It has the advantage of edge-preserving and efficiency. However, it is vulnerable to input matching noise, which is inevitable in modern matching techniques. In this paper, we present a Robust Interpolation method of Correspondences (called RicFlow) to overcome the weakness. First, the scene is over-segmented into superpixels to revitalize an early idea of piecewise flow model. Then, each model is estimated robustly from its support neighbors based on a graph constructed on superpixels. We propose a propagation mechanism among the pieces in the estimation of models. The propagation of models is significantly more efficient than the independent estimation of each model, yet retains the accuracy. Extensive experiments on three public datasets demonstrate that RicFlow is more robust than EpicFlow, and it outperforms state-of-the-art methods.", "title": "" }, { "docid": "904c8b4be916745c7d1f0777c2ae1062", "text": "In this paper, we address the problem of continuous access control enforcement in dynamic data stream environments, where both data and query security restrictions may potentially change in real-time. We present FENCE framework that ffectively addresses this problem. The distinguishing characteristics of FENCE include: (1) the stream-centric approach to security, (2) the symmetric model for security settings of both continuous queries and streaming data, and (3) two alternative security-aware query processing approaches that can optimize query execution based on regular and security-related selectivities. In FENCE, both data and query security restrictions are modeled symmetrically in the form of security metadata, called \"security punctuations\" embedded inside data streams. We distinguish between two types of security punctuations, namely, the data security punctuations (or short, dsps) which represent the access control policies of the streaming data, and the query security punctuations (or short, qsps) which describe the access authorizations of the continuous queries. We also present our encoding method to support XACML(eXtensible Access Control Markup Language) standard. We have implemented FENCE in a prototype DSMS and present our performance evaluation. The results of our experimental study show that FENCE's approach has low overhead and can give great performance benefits compared to the alternative security solutions for streaming environments.", "title": "" }, { "docid": "8fd3c6231e8c8522157439edc7b7344f", "text": "We are implementing ADAPT, a cognitive architecture for a Pioneer mobile robot, to give the robot the full range of cognitive abilities including perception, use of natural language, learning and the ability to solve complex problems. Our perspective is that an architecture based on a unified theory of robot cognition has the best chance of attaining human-level performance. Existing work in cognitive modeling has accomplished much in the construction of such unified cognitive architectures in areas other than robotics; however, there are major respects in which these architectures are inadequate for robot cognition. 
This paper examines two major inadequacies of current cognitive architectures for robotics: the absence of support for true concurrency and for active", "title": "" } ]
scidocsrr
fa05059ee4caed8a9d565fc0ec0d0b5b
Context-Sensitive Twitter Sentiment Classification Using Neural Network
[ { "docid": "e95d41b322dccf7f791ed88a9f2ccced", "text": "Most of the recent literature on Sentiment Analysis over Twitter is tied to the idea that the sentiment is a function of an incoming tweet. However, tweets are filtered through streams of posts, so that a wider context, e.g. a topic, is always available. In this work, the contribution of this contextual information is investigated. We modeled the polarity detection problem as a sequential classification task over streams of tweets. A Markovian formulation of the Support Vector Machine discriminative model as embodied by the SVMhmm algorithm has been here employed to assign the sentiment polarity to entire sequences. The experimental evaluation proves that sequential tagging effectively embodies evidence about the contexts and is able to reach a relative increment in detection accuracy of around 20% in F1 measure. These results are particularly interesting as the approach is flexible and does not require manually coded resources.", "title": "" } ]
[ { "docid": "2c18433b18421cd9e0f28605809a8665", "text": "Cross-domain knowledge bases such as DBpedia, YAGO, or the Google Knowledge Graph have gained increasing attention over the last years and are starting to be deployed within various use cases. However, the content of such knowledge bases is far from being complete, far from always being correct, and suffers from deprecation (i.e. population numbers become outdated after some time). Hence, there are efforts to leverage various types of Web data to complement, update and extend such knowledge bases. A source of Web data that potentially provides a very wide coverage are millions of relational HTML tables that are found on the Web. The existing work on using data from Web tables to augment cross-domain knowledge bases reports only aggregated performance numbers. The actual content of the Web tables and the topical areas of the knowledge bases that can be complemented using the tables remain unclear. In this paper, we match a large, publicly available Web table corpus to the DBpedia knowledge base. Based on the matching results, we profile the potential of Web tables for augmenting different parts of cross-domain knowledge bases and report detailed statistics about classes, properties, and instances for which missing values can be filled using Web table data as evidence. In order to estimate the potential quality of the new values, we empirically examine the Local Closed World Assumption and use it to determine the maximal number of correct facts that an ideal data fusion strategy could generate. Using this as ground truth, we compare three data fusion strategies and conclude that knowledge-based trust outperforms PageRankand voting-based fusion.", "title": "" }, { "docid": "fac131e435b5dfe9a7cd839b07bec139", "text": "The past two decades have witnessed an explosion in the identification, largely by positional cloning, of genes associated with mendelian diseases. The roughly 1,200 genes that have been characterized have clarified our understanding of the molecular basis of human genetic disease. The principles derived from these successes should be applied now to strategies aimed at finding the considerably more elusive genes that underlie complex disease phenotypes. The distribution of types of mutation in mendelian disease genes argues for serious consideration of the early application of a genomic-scale sequence-based approach to association studies and against complete reliance on a positional cloning approach based on a map of anonymous single nucleotide polymorphism haplotypes.", "title": "" }, { "docid": "6291caf1fae634c6e9ce8a22dab35cce", "text": "Effective home energy management requires data on the current power consumption of devices in the home. Individually monitoring every appliance is costly and inconvenient. Non-Intrusive Load Monitoring (NILM) promises to provide individual electrical load information from aggregate power measurements. Application of NILM in residential settings has been constrained by the data provided by utility billing smart meters. Current utility billing smart meters do not deliver data that supports quantifying the harmonic content in the 60 Hz waveforms. Research in NILM has a critical need for a low-cost sensor system to collect energy data with fast sampling and significant precision to demonstrate actual data requirements. Implementation of cost-effective NILM in a residential consumer context requires real-time processing of this data to identify individual loads. 
This paper describes a system providing a powerful and flexible platform, supporting user configuration of sampling rates and amplitude resolution up to 65 kHz and up to 24 bits respectively. The internal processor is also capable of running NILM algorithms in real time on the sampled measurements. Using this prototype, real-time load identification can be provided to the consumer for control, visualization, feedback, and demand response implications.", "title": "" }, { "docid": "803a5dbedf309cec97d130438e687002", "text": "Affective computing is a new trend whose main goal is exploring human emotion. Human emotion is taken as a key behavioral clue, and hence it should be included within the sensible model when an intelligent system aims to simulate or forecast human responses. This research utilizes the decision tree, one of the data mining models, to classify emotion. This research integrates and manipulates Thayer's emotion model and color theory into the decision tree model, C4.5, for an innovative emotion detecting system. This paper uses 320 data points in four emotion groups to train and build the decision tree and to verify the accuracy of the system. The result reveals that the C4.5 decision tree model can effectively classify emotion from the color feedback given by humans. For further research, colors will not be the only human behavior clues; additional factors from human interaction will also be considered.", "title": "" }, { "docid": "fba35e7409ab7bf9a760f4aeb007a77a", "text": "Sparse code multiple access (SCMA) is a promising uplink multiple access technique that can achieve superior spectral efficiency, provided that multidimensional codebooks are carefully designed. In this letter, we investigate the multiuser codebook design for SCMA systems over Rayleigh fading channels. The criterion of the proposed design is derived from the cutoff rate analysis of the equivalent multiple-input multiple-output system. Furthermore, new codebooks with signal-space diversity are suggested, while simulations show that this criterion is efficient in developing codebooks with substantial performance improvement, compared with the existing ones.", "title": "" }, { "docid": "b5cc41f689a1792b544ac66a82152993", "text": "Nowadays, Pneumatic Artificial Muscle (PAM) has become one of the most widely-used fluid-power actuators which yields remarkable muscle-like properties such as high force-to-weight ratio, soft and flexible structure, minimal compressed-air consumption and low cost. To obtain optimum design and usage, it is necessary to understand the mechanical behaviors of the PAM. In this study, the proposed models are experimentally derived to describe the mechanical behaviors of the PAMs. The experimental results show a non-linear relationship between contraction as well as air pressure within the PAMs and the pulling force of the PAMs. Three different sizes of PAMs available in industry are studied for empirical modeling and simulation. The case studies are presented to verify close agreement of the simulated results with the experimental results when the PAMs perform under various loads.
", "title": "" }, { "docid": "7cef2fac422d9fc3c3ffbc130831b522", "text": "Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations. This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out in a safer, cheaper, and more manageable way. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles to represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested to great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.", "title": "" }, { "docid": "4adee6dc3dfc57c4180c4107e0af89a8", "text": "Objective\nSocial media is an important pharmacovigilance data source for adverse drug reaction (ADR) identification. Human review of social media data is infeasible due to data quantity, thus natural language processing techniques are necessary. Social media includes informal vocabulary and irregular grammar, which challenge natural language processing methods. Our objective is to develop a scalable, deep-learning approach that exceeds state-of-the-art ADR detection performance in social media.\n\n\nMaterials and Methods\nWe developed a recurrent neural network (RNN) model that labels words in an input sequence with ADR membership tags. The only input features are word-embedding vectors, which can be formed through task-independent pretraining or during ADR detection training.\n\n\nResults\nOur best-performing RNN model used pretrained word embeddings created from a large, non-domain-specific Twitter dataset. It achieved an approximate match F-measure of 0.755 for ADR identification on the dataset, compared to 0.631 for a baseline lexicon system and 0.65 for the state-of-the-art conditional random field model. Feature analysis indicated that semantic information in pretrained word embeddings boosted sensitivity and, combined with contextual awareness captured in the RNN, precision.\n\n\nDiscussion\nOur model required no task-specific feature engineering, suggesting generalizability to additional sequence-labeling tasks. Learning curve analysis showed that our model reached optimal performance with fewer training examples than the other models.\n\n\nConclusion\nADR detection performance in social media is significantly improved by using a contextually aware model and word embeddings formed from large, unlabeled datasets. The approach reduces manual data-labeling requirements and is scalable to large social media datasets.", "title": "" }, { "docid": "9b5877847bedecd73a8c2f0d6f832641", "text": "Traditional, more biochemically motivated approaches to chemical design and drug discovery are notoriously complex and costly processes. 
The space of all synthesizable molecules is far too large to exhaustively search any meaningful subset for interesting novel drug and molecule proposals, and the lack of any particularly informative and manageable structure to this search space makes the very task of defining interesting subsets a difficult problem in itself. Recent years have seen the proposal and rapid development of alternative, machine learning-based methods for vastly simplifying the search problem specified in chemical design and drug discovery. In this work, I build upon this existing literature exploring the possibility of automatic chemical design and propose a novel generative model for producing a diverse set of valid new molecules. The proposed molecular graph variational autoencoder model achieves comparable performance across standard metrics to the state-of-the-art in this problem area and is capable of regularly generating valid molecule proposals similar but distinctly different from known sets of interesting molecules. While an interesting result in terms of addressing one of the core issues with machine learning-based approaches to automatic chemical design, further research in this direction should aim to optimize for more biochemically motivated objectives and be more informed by the ultimate utility of such models to the biochemical field.", "title": "" }, { "docid": "a54f912c14b44fc458ed8de9e19a5e82", "text": "Musical training has recently gained additional interest in education as increasing neuroscientific research demonstrates its positive effects on brain development. Neuroimaging revealed plastic changes in the brains of adult musicians but it is still unclear to what extent they are the product of intensive music training rather than of other factors, such as preexisting biological markers of musicality. In this review, we synthesize a large body of studies demonstrating that benefits of musical training extend beyond the skills it directly aims to train and last well into adulthood. For example, children who undergo musical training have better verbal memory, second language pronunciation accuracy, reading ability and executive functions. Learning to play an instrument as a child may even predict academic performance and IQ in young adulthood. The degree of observed structural and functional adaptation in the brain correlates with intensity and duration of practice. Importantly, the effects on cognitive development depend on the timing of musical initiation due to sensitive periods during development, as well as on several other modulating variables. Notably, we point to motivation, reward and social context of musical education, which are important yet neglected factors affecting the long-term benefits of musical training. Further, we introduce the notion of rhythmic entrainment and suggest that it may represent a mechanism supporting learning and development of executive functions. It also hones temporal processing and orienting of attention in time that may underlie enhancements observed in reading and verbal memory. We conclude that musical training uniquely engenders near and far transfer effects, preparing a foundation for a range of skills, and thus fostering cognitive development.", "title": "" }, { "docid": "7ef86793639ce209fa168f4368854b5e", "text": "In this paper, we compare learning techniques based on statistical classification to traditional methods of relevance feedback for the document routing problem. 
We consider three classification techniques which have decision rules that are derived via explicit error minimization: linear discriminant analysis, logistic regression, and neural networks. We demonstrate that the classifiers perform 10-15% better than relevance feedback via Rocchio expansion for the TREC-2 and TREC-3 routing tasks. Error minimization is difficult in high-dimensional feature spaces because the convergence process is slow and the models are prone to overfitting. We use two different strategies, latent semantic indexing and optimal term selection, to reduce the number of features. Our results indicate that features based on latent semantic indexing are more effective for techniques such as linear discriminant analysis and logistic regression, which have no way to protect against overfitting. Neural networks perform equally well with either set of features and can take advantage of the additional information available when both feature sets are used as input.", "title": "" }, { "docid": "5b3ba0fc32229e78cfde49716ce909bd", "text": "Previous work by Lin et al. (2011) demonstrated the effectiveness of using discourse relations for evaluating text coherence. However, their work was based on discourse relations annotated in accordance with the Penn Discourse Treebank (PDTB) (Prasad et al., 2008), which encodes only very shallow discourse structures; therefore, they cannot capture long-distance discourse dependencies. In this paper, we study the impact of deep discourse structures for the task of coherence evaluation, using two approaches: (1) We compare a model with features derived from discourse relations in the style of Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), which annotate the full hierarchical discourse structure, against our re-implementation of Lin et al.'s model; (2) We compare a model encoded using only shallow RST-style discourse relations, against the one encoded using the complete set of RST-style discourse relations. With an evaluation on two tasks, we show that deep discourse structures are truly useful for better differentiation of text coherence, and in general, RST-style encoding is more powerful than PDTB-style encoding in these settings.", "title": "" }, { "docid": "f9ee82dcf1cce6d41a7f106436ee3a7d", "text": "The Automatic Identification System (AIS) is based on VHF radio transmissions of ships' identity, position, speed and heading, in addition to other key parameters. In 2004, the Norwegian Defence Research Establishment (FFI) undertook studies to evaluate if the AIS signals could be detected in low Earth orbit. Since then, the interest in Space-Based AIS reception has grown significantly, and both public and private sector organizations have established programs to study the issue, and demonstrate such a capability in orbit. FFI is conducting two such programs. The objective of the first program was to launch a nano-satellite equipped with an AIS receiver into a near polar orbit, to demonstrate Space-Based AIS reception at high latitudes. The satellite was launched from India 12th July 2010. Even though the satellite has not finished commissioning, the receiver is operated with real-time transmission of received AIS data to the Norwegian Coastal Administration. The second program is an ESA-funded project to operate an AIS receiver on the European Columbus module of the International Space Station. Mounting of the equipment, the NORAIS receiver, was completed in April 2010. 
Currently, the AIS receiver has operated for more than three months, picking up several million AIS messages from more than 60 000 ship identities. In this paper, we will present experience gained with the space-based AIS systems, highlight aspects of tracking ships throughout their voyage, and comment on possible contributions to port security.", "title": "" }, { "docid": "53b6315bfb8fcfef651dd83138b11378", "text": "We illustrate the correspondence between uncertainty sets in robust optimization and some popular risk measures in finance, and show how robust optimization can be used to generalize the concepts of these risk measures. We also show that by using properly defined uncertainty sets in robust optimization models, one can construct coherent risk measures. Our results have implications for efficient portfolio optimization under different measures of risk.", "title": "" }, { "docid": "fb484e0b6b5e82984a3e1176dfae8d4c", "text": "In this paper, we describe how we are using text mining solutions to enhance the production of systematic reviews. This collaborative project also serves as a proof of concept and as a testbed for deriving requirements for the development of more generally applicable text mining tools and services.", "title": "" }, { "docid": "4c951d6be8b49c9931492b5f89009fb3", "text": "Tooth preparations for fixed prosthetic restorations can be done in different ways, basically of two kinds: preparation with a defined margin and the so-called vertical preparation or feather edge. The latter was originally used for prosthetics on teeth treated with resective surgery for periodontal disease. In this article, the author presents a prosthetic technique for periodontally healthy teeth using feather edge preparation in a flapless approach in both esthetic and posterior areas with ceramometal and zirconia restorations, achieving high-quality clinical and esthetic results in terms of soft tissue stability at the prosthetic/tissue interface, both in the short and in the long term (clinical follow-up up to fifteen years). Moreover, the BOPT technique, if compared to other preparation techniques (chamfer, shoulder, etc), is simpler and faster in preparation, impression taking, temporary crowns' relining and creating the crowns' profiles up to the final prosthetic restoration.", "title": "" }, { "docid": "fd568ae231543517bd660d37c0b71570", "text": "Chemical and electrical interaction within and between cells is well established. Just the opposite is true about cellular interactions via other physical fields. The most probable candidate for another form of cellular interaction is the electromagnetic field. 
We review theories and experiments on how cells can generate and detect electromagnetic fields generally, and if the cell-generated electromagnetic field can mediate cellular interactions. We do not limit here ourselves to specialized electro-excitable cells. Rather we describe physical processes that are of a more general nature and probably present in almost every type of living cell. The spectral range included is broad; from kHz to the visible part of the electromagnetic spectrum. We show that there is a rather large number of theories on how cells can generate and detect electromagnetic fields and discuss experimental evidence on electromagnetic cellular interactions in the modern scientific literature. Although small, it is continuously accumulating.", "title": "" }, { "docid": "d9176322068e6ca207ae913b1164b3da", "text": "Topic Detection and Tracking (TDT) is a variant of classification in which the classes are not known or fixed in advance. Consider for example an incoming stream of news articles or email messages that are to be classified by topic; new classes must be created as new topics arise. The problem is a challenging one for machine learning. Instances of new topics must be recognized as not belonging to any of the existing classes (detection), and instances of old topics must be correctly classified (tracking), often with extremely little training data per class. This paper proposes a new approach to TDT based on probabilistic, generative models. Strong statistical techniques are used to address the many challenges: hierarchical shrinkage for sparse data, statistical \"garbage collection\" for new event detection, clustering in time to separate the different events of a common topic, and deterministic annealing for creating the hierarchy. Preliminary experimental results show promise.", "title": "" }, { "docid": "7dbb697a8793027d8aa55202989cb99e", "text": "We consider the problem of finding the minimizer of a function f : R → R of the finite-sum form min_w f(w) = (1/n) ∑_{i=1}^{n} f_i(w). This problem has been studied intensively in recent years in the field of machine learning (ML). One promising approach for large-scale data is to use a stochastic optimization algorithm to solve the problem. SGDLibrary is a readable, flexible and extensible pure-MATLAB library of a collection of stochastic optimization algorithms. The purpose of the library is to provide researchers and implementers a comprehensive evaluation environment for the use of these algorithms on various ML problems. Published in Journal of Machine Learning Research (JMLR) entitled “SGDLibrary: A MATLAB library for stochastic gradient optimization algorithms” [1]", "title": "" }, { "docid": "f840350d14a99f3da40729cfe6d56ef5", "text": "This paper presents a sub-radix-2 redundant architecture to improve the performance of switched-capacitor successive-approximation-register (SAR) analog-to-digital converters (ADCs). The redundancy not only guarantees digitally correctable static nonlinearities of the converter, it also offers means to combat dynamic errors in the conversion process, and thus, accelerating the speed of the SAR architecture. A perturbation-based digital calibration technique is also described that closely couples with the architecture choice to accomplish simultaneous identification of multiple capacitor mismatch errors of the ADC, enabling the downsizing of all sampling capacitors to save power and silicon area. 
A 12-bit prototype measured a Nyquist 70.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a Nyquist 90.3-dB spurious free dynamic range (SFDR) at 22.5 MS/s, while dissipating 3.0-mW power from a 1.2-V supply and occupying 0.06-mm2 silicon area in a 0.13-μm CMOS process. The figure of merit (FoM) of this ADC is 51.3 fJ/step measured at 22.5 MS/s and 36.7 fJ/step at 45 MS/s.", "title": "" } ]
scidocsrr
46c04eddbe1e50d88d2bb9c45dc674f8
Aspect Term and Opinion Target Extraction from Web Product Reviews using Semi-Markov Conditional Random Fields with Word Embeddings as Features
[ { "docid": "69d65a994d5b5c412ee6b8a266cb9b31", "text": "This paper describes our system used in the Aspect Based Sentiment Analysis Task 4 at the SemEval-2014. Our system consists of two components to address two of the subtasks respectively: a Conditional Random Field (CRF) based classifier for Aspect Term Extraction (ATE) and a linear classifier for Aspect Term Polarity Classification (ATP). For the ATE subtask, we implement a variety of lexicon, syntactic and semantic features, as well as cluster features induced from unlabeled data. Our system achieves state-of-the-art performances in ATE, ranking 1st (among 28 submissions) and 2rd (among 27 submissions) for the restaurant and laptop domain respectively.", "title": "" }, { "docid": "3627ee0e7be9c6d664dea1912c0b91d4", "text": "Given a set of texts discussing a particular entity (e.g., customer reviews of a smartphone), aspect based sentiment analysis (ABSA) identifies prominent aspects of the entity (e.g., battery, screen) and an average sentiment score per aspect. We focus on aspect term extraction (ATE), one of the core processing stages of ABSA that extracts terms naming aspects. We make publicly available three new ATE datasets, arguing that they are better than previously available ones. We also introduce new evaluation measures for ATE, again arguing that they are better than previously used ones. Finally, we show how a popular unsupervised ATE method can be improved by using continuous space vector representations of words and phrases.", "title": "" } ]
[ { "docid": "3d9c02413c80913cb32b5094dcf61843", "text": "There is an explosion of youth subscriptions to original content-media-sharing Web sites such as YouTube. These Web sites combine media production and distribution with social networking features, making them an ideal place to create, connect, collaborate, and circulate. By encouraging youth to become media creators and social networkers, new media platforms such as YouTube offer a participatory culture in which youth can develop, interact, and learn. As youth development researchers, we must be cognizant of this context and critically examine what this platform offers that might be unique to (or redundant of) typical adolescent experiences in other developmental contexts.", "title": "" }, { "docid": "8e26d11fa1ab330a429f072c1ac17fe2", "text": "The objective of this study was to report the signalment, indications for surgery, postoperative complications and outcome in dogs undergoing penile amputation and scrotal urethrostomy. Medical records of three surgical referral facilities were reviewed for dogs undergoing penile amputation and scrotal urethrostomy between January 2003 and July 2010. Data collected included signalment, presenting signs, indication for penile amputation, surgical technique, postoperative complications and long-term outcome. Eighteen dogs were included in the study. Indications for surgery were treatment of neoplasia (n=6), external or unknown penile trauma (n=4), penile trauma or necrosis associated with urethral obstruction with calculi (n=3), priapism (n=4) and balanoposthitis (n=1). All dogs suffered mild postoperative haemorrhage (posturination and/or spontaneous) from the urethrostomy stoma for up to 21 days (mean 5.5 days). Four dogs had minor complications recorded at suture removal (minor dehiscence (n=1), mild bruising and swelling around the urethrostomy site and mild haemorrhage at suture removal (n=2), and granulation at the edge of stoma (n=1)). One dog had a major complication (wound dehiscence and subsequent stricture of the stoma). Long-term outcome was excellent in all dogs with non-neoplastic disease. Local tumour recurrence and/or metastatic disease occurred within five to 12 months of surgery in two dogs undergoing penile amputation for the treatment of neoplasia. Both dogs were euthanased.", "title": "" }, { "docid": "5f48f30d7b3dff5e302db639cb919c30", "text": "Multi-band/multi-system mobile phones require a complex RF-frontend architecture. Complexity has increased to a point where adding switches and whole signal branches for an additional band is no longer cost effective. Alternative concepts involve ‘converged’ power amplifiers and switching concepts supporting those. Filters and duplexers play a key role in converged architectures and their requirements will be reviewed. Challenges that arise from combining filter functions for different wireless standards will be pointed out. Existing RF-filter technologies are based on high-Q acoustic resonators realized either in Surface Acoustic Wave (SAW) or Bulk-Acoustic-Wave (BAW) technology and do not allow changing the frequency characteristics on the fly. Concepts for tunable RF-filters - pursuing the ‘holy grail’ - will be discussed and an overview on the status of this matter will be presented.", "title": "" }, { "docid": "6981598efd4a70f669b5abdca47b7ea1", "text": "The in-flight alignment is a critical stage for airborne inertial navigation system/Global Positioning System (INS/GPS) applications. 
The alignment task is usually carried out by the Kalman filtering technique that necessitates a good initial attitude to obtain a satisfying performance. Due to the airborne dynamics, the in-flight alignment is much more difficult than the alignment on the ground. An optimization-based coarse alignment approach that uses GPS position/velocity as input, founded on the newly-derived velocity/position integration formulae is proposed. Simulation and flight test results show that, with the GPS lever arm well handled, it is potentially able to yield the initial heading up to 1 deg accuracy in 10 s. It can serve as a nice coarse in-flight alignment without any prior attitude information for the subsequent fine Kalman alignment. The approach can also be applied to other applications that require aligning the INS on the run.", "title": "" }, { "docid": "eace2242d7556f47f91cd57c73728550", "text": "The success of mobile robots, and particularly of those interfacing with humans in daily environments (e.g., assistant robots), relies on the ability to manipulate information beyond simple spatial relations. We are interested in semantic information, which gives meaning to spatial information like images or geometric maps. We present a multi-hierarchical approach to enable a mobile robot to acquire semantic information from its sensors, and to use it for navigation tasks. In our approach, the link between spatial and semantic information is established via anchoring. We show experiments on a real mobile robot that demonstrate its ability to use and infer new semantic information from its environment, improving its operation.", "title": "" }, { "docid": "eba5ef77b594703c96c0e2911fcce7b0", "text": "Deep Neural Network Hidden Markov Models, or DNN-HMMs, are recently very promising acoustic models achieving good speech recognition results over Gaussian mixture model based HMMs (GMM-HMMs). In this paper, for emotion recognition from speech, we investigate DNN-HMMs with restricted Boltzmann Machine (RBM) based unsupervised pre-training, and DNN-HMMs with discriminative pre-training. Emotion recognition experiments are carried out on these two models on the eNTERFACE'05 database and Berlin database, respectively, and results are compared with those from the GMM-HMMs, the shallow-NN-HMMs with two layers, as well as the Multi-layer Perceptrons HMMs (MLP-HMMs). Experimental results show that when the numbers of the hidden layers as well hidden units are properly set, the DNN could extend the labeling ability of GMM-HMM. Among all the models, the DNN-HMMs with discriminative pre-training obtain the best results. For example, for the eNTERFACE'05 database, the recognition accuracy improves 12.22% from the DNN-HMMs with unsupervised pre-training, 11.67% from the GMM-HMMs, 10.56% from the MLP-HMMs, and even 17.22% from the shallow-NN-HMMs, respectively.", "title": "" }, { "docid": "e94c9f0ef8e696a1b2e85f18f98d2e36", "text": "Driven by pervasive mobile devices and ubiquitous wireless communication networks, mobile cloud computing emerges as an appealing paradigm to accommodate demands for running power-hungry or computation-intensive applications over resource-constrained mobile devices. Cloudlets that move available resources closer to the network edge offer a promising architecture to support real-time applications, such as online gaming and speech recognition. To stimulate service provisioning by cloudlets, it is essential to design an incentive mechanism that charges mobile devices and rewards cloudlets. 
Although auction has been considered as a promising form for incentive, it is challenging to design an auction mechanism that holds certain desirable properties for the cloudlet scenario. In this paper, we propose an incentive-compatible auction mechanism (ICAM) for the resource trading between the mobile devices as service users (buyers) and cloudlets as service providers (sellers). ICAM can effectively allocate cloudlets to satisfy the service demands of mobile devices and determine the pricing. Both the theoretical analysis and the numerical results show that the ICAM guarantees desired properties with respect to individual rationality, budget balance and truthfulness (incentive compatibility) for both the buyers and the sellers, and computational efficiency.", "title": "" }, { "docid": "7c9e89cb3384a34195fd6035cd2e75a0", "text": "Manual analysis of pedestrians and crowds is often impractical for massive datasets of surveillance videos. Automatic tracking of humans is one of the essential abilities for computerized analysis of such videos. In this keynote paper, we present two state of the art methods for automatic pedestrian tracking in videos with low and high crowd density. For videos with low density, first we detect each person using a part-based human detector. Then, we employ a global data association method based on Generalized Graphs for tracking each individual in the whole video. In videos with high crowd-density, we track individuals using a scene structured force model and crowd flow modeling. Additionally, we present an alternative approach which utilizes contextual information without the need to learn the structure of the scene. Performed evaluations show the presented methods outperform the currently available algorithms on several benchmarks.", "title": "" }, { "docid": "9861dd523aee4baca85a1fdb53aff4d1", "text": "We address the task of hierarchical multi-label classification (HMC). HMC is a task of structured output prediction where the classes are organized into a hierarchy and an instance may belong to multiple classes. In many problems, such as gene function prediction or prediction of ecological community structure, classes inherently follow these constraints. The potential for application of HMC was recognized by many researchers and several such methods were proposed and demonstrated to achieve good predictive performances in the past. However, there is no clear understanding when is favorable to consider such relationships (hierarchical and multi-label) among classes, and when this presents unnecessary burden for classification methods. To this end, we perform a detailed comparative study over 8 datasets that have HMC properties. We investigate two important influences in HMC: the multiple labels per example and the information about the hierarchy. More specifically, we consider four machine learning tasks: multi-label classification, hierarchical multi-label classification, single-label classification and hierarchical single-label classification. To construct the predictive models, we use predictive clustering trees (a generalized form of decision trees), which are able to tackle each of the modelling tasks listed. Moreover, we investigate whether the influence of the hierarchy and the multiple labels carries over for ensemble models. For each of the tasks, we construct a single tree and two ensembles (random forest and bagging). 
The results reveal that the hierarchy and the multiple labels do help to obtain a better single tree model, while this is not preserved for the ensemble models.", "title": "" }, { "docid": "cb46b6331371cf3b790ba2b10539f70e", "text": "The problem of matching measured latitude/longitude points to roads is becoming increasingly important. This paper describes a novel, principled map matching algorithm that uses a Hidden Markov Model (HMM) to find the most likely road route represented by a time-stamped sequence of latitude/longitude pairs. The HMM elegantly accounts for measurement noise and the layout of the road network. We test our algorithm on ground truth data collected from a GPS receiver in a vehicle. Our test shows how the algorithm breaks down as the sampling rate of the GPS is reduced. We also test the effect of increasing amounts of additional measurement noise in order to assess how well our algorithm could deal with the inaccuracies of other location measurement systems, such as those based on WiFi and cell tower multilateration. We provide our GPS data and road network representation as a standard test set for other researchers to use in their map matching work.", "title": "" }, { "docid": "9df6afad3843f4b0ef881fb9bcc68148", "text": "Discrete-action algorithms have been central to numerous recent successes of deep reinforcement learning. However, applying these algorithms to high-dimensional action tasks requires tackling the combinatorial increase of the number of possible actions with the number of action dimensions. This problem is further exacerbated for continuous-action tasks that require fine control of actions via discretization. In this paper, we propose a novel neural architecture featuring a shared decision module followed by several network branches, one for each action dimension. This approach achieves a linear increase of the number of network outputs with the number of degrees of freedom by allowing a level of independence for each individual action dimension. To illustrate the approach, we present a novel agent, called Branching Dueling Q-Network (BDQ), as a branching variant of the Dueling Double Deep Q-Network (Dueling DDQN). We evaluate the performance of our agent on a set of challenging continuous control tasks. The empirical results show that the proposed agent scales gracefully to environments with increasing action dimensionality and indicate the significance of the shared decision module in coordination of the distributed action branches. Furthermore, we show that the proposed agent performs competitively against a state-of-the-art continuous control algorithm, Deep Deterministic Policy Gradient (DDPG).", "title": "" }, { "docid": "5eadbd77422c906c8c3b651a2041ccbd", "text": "The fifth edition of Computer Organization and Design-winner of a 2014 Textbook Excellence Award (Texty) from The Text and Academic Authors Association-moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This edition of mobile computing devices and switches copyright muze inc. President of stanford focuses on the text and content featuring tablet computers. Because an independent company in depth with updated and the computing material highlighting. 235 191 mm john, von neumann award. Because an was unlucky but after that the cloud computing. Because an understanding of the post pc era. 
Hennessy is the core i7 arm, cortex a8 and academic authors association highlights.", "title": "" }, { "docid": "e322a4f6d36ccc561b6b793ef85db9c2", "text": "Abdominal bracing is often adopted in fitness and sports conditioning programs. However, there is little information on how muscular activities during the task differ among the muscle groups located in the trunk and from those during other trunk exercises. The present study aimed to quantify muscular activity levels during abdominal bracing with respect to muscle- and exercise-related differences. Ten healthy young adult men performed five static (abdominal bracing, abdominal hollowing, prone, side, and supine plank) and five dynamic (V- sits, curl-ups, sit-ups, and back extensions on the floor and on a bench) exercises. Surface electromyogram (EMG) activities of the rectus abdominis (RA), external oblique (EO), internal oblique (IO), and erector spinae (ES) muscles were recorded in each of the exercises. The EMG data were normalized to those obtained during maximal voluntary contraction of each muscle (% EMGmax). The % EMGmax value during abdominal bracing was significantly higher in IO (60%) than in the other muscles (RA: 18%, EO: 27%, ES: 19%). The % EMGmax values for RA, EO, and ES were significantly lower in the abdominal bracing than in some of the other exercises such as V-sits and sit-ups for RA and EO and back extensions for ES muscle. However, the % EMGmax value for IO during the abdominal bracing was significantly higher than those in most of the other exercises including dynamic ones such as curl-ups and sit-ups. These results suggest that abdominal bracing is one of the most effective techniques for inducing a higher activation in deep abdominal muscles, such as IO muscle, even compared to dynamic exercises involving trunk flexion/extension movements. Key PointsTrunk muscle activities during abdominal bracing was examined with regard to muscle- and exercise-related differences.Abdominal bracing preferentially activates internal oblique muscles even compared to dynamic exercises involving trunk flexion/extension movements.Abdominal bracing should be included in exercise programs when the goal is to improve spine stability.", "title": "" }, { "docid": "0c67afcb351c53c1b9e2b4bcf3b0dc08", "text": "The Scrum methodology is an agile software development process that works as a project management wrapper around existing engineering practices to iteratively and incrementally develop software. With Scrum, for a developer to receive credit for his or her work, he or she must demonstrate the new functionality provided by a feature at the end of each short iteration during an iteration review session. Such a short-term focus without the checks and balances of sound engineering practices may lead a team to neglect quality. In this paper we present the experiences of three teams at Microsoft using Scrum with an additional nine sound engineering practices. Our results indicate that these teams were able to improve quality, productivity, and estimation accuracy through the combination of Scrum and nine engineering practices.", "title": "" }, { "docid": "c71a8c9163d6bf294a5224db1ff5c6f5", "text": "BACKGROUND\nOsteosarcoma is the second most common primary tumor of the skeletal system and the most common primary bone tumor. Usually occurring at the metaphysis of long bones, osteosarcomas are highly aggressive lesions that comprise osteoid-producing spindle cells. 
Craniofacial osteosarcomas comprise <8% and are believed to be less aggressive and lower grade. Primary osteosarcomas of the skull and skull base comprise <2% of all skull tumors. Osteosarcomas originating from the clivus are rare. We present a case of a primar, high-grade clival osteosarcoma.\n\n\nCASE DESCRIPTION\nA 29-year-old man presented to our institution with a progressively worsening right frontal headache for 3 weeks. There were no sensory or cranial nerve deficits. Computed tomography revealed a destructive mass involving the clivus with extension into the left sphenoid sinus. Magnetic resonance imaging revealed a homogenously enhancing lesion measuring 2.7 × 2.5 × 3.2 cm. The patient underwent endonasal transphenoidal surgery for gross total resection. The histopathologic analysis revealed proliferation of malignant-appearing spindled and epithelioid cells with associated osteoclast-like giant cells and a small area of osteoid production. The analysis was consistent with high-grade osteosarcoma. The patient did well and was discharged on postoperative day 2. He was referred for adjuvant radiation therapy and chemotherapy. Two-year follow-up showed postoperative changes and clival expansion caused by packing material.\n\n\nCONCLUSIONS\nOsteosarcoma is a highly malignant neoplasm. These lesions are usually found in the extremities; however, they may rarely present in the craniofacial region. Clival osteosarcomas are relatively infrequent. We present a case of a primary clival osteosarcoma with high-grade pathology.", "title": "" }, { "docid": "b4d92c6573f587c60d135b8fa579aade", "text": "Knowing the structure of criminal and terrorist networks could provide the technical insight needed to disrupt their activities.", "title": "" }, { "docid": "9cc8d5f395a11ceaabdf9b2e57aa2bc9", "text": "This paper proposes a Model Predictive Control methodology for a non-inverting Buck-Boost DC-DC converter for its efficient control. PID and MPC control strategies are simulated for the control of Buck-Boost converter and its performance is compared using MATLAB Simulink model. MPC shows better performance compared to PID controller. Output follows reference voltage more accurately showing that MPC can handle the dynamics of the system efficiently. The proposed methodology can be used for constant voltage applications. The control strategy can be implemented using a Field Programmable Gate Array (FPGA).", "title": "" }, { "docid": "fc54423c32cd0fd86d3f72b20bb11788", "text": "We construct two efficient Identity Based Encryption (IBE) systems that are selective identity secure without the random oracle model. Selective identity secure IBE is a slightly weaker security model than the standard security model for IBE. In this model the adversary must commit ahead of time to the identity that it intends to attack, whereas in the standard model the adversary is allowed to choose this identity adaptively. Our first secure IBE system extends to give a selective identity Hierarchical IBE secure without random oracles.", "title": "" }, { "docid": "ff59d1ec0c3eb11b3201e5708a585ca4", "text": "In this paper, we described our system for Knowledge Base Acceleration (KBA) Track at TREC 2013. The KBA Track has two tasks, CCR and SSF. Our approach consists of two major steps: selecting documents and extracting slot values. Selecting documents is to look for and save the documents that mention the entities of interest. 
The second step involves generating seed patterns to extract the slot values and computing confidence scores.", "title": "" } ]
scidocsrr
01a347689589ebb9a65937b2e7956c34
Dual Polarized Dual Antennas for 1.7–2.1 GHz LTE Base Stations
[ { "docid": "2cebd2fd12160d2a3a541989293f10be", "text": "A compact Vivaldi antenna array printed on thick substrate and fed by a Substrate Integrated Waveguides (SIW) structure has been developed. The antenna array utilizes a compact SIW binary divider to significantly minimize the feed structure insertion losses. The low-loss SIW binary divider has a common novel Grounded Coplanar Waveguide (GCPW) feed to provide a wideband transition to the SIW and to sustain a good input match while preventing higher order modes excitation. The antenna array was designed, fabricated, and thoroughly investigated. Detailed simulations of the antenna and its feed, in addition to its relevant measurements, will be presented in this paper.", "title": "" } ]
[ { "docid": "ff36b5154e0b85faff09a5acbb39bb0a", "text": "During a frequent survey in the northwest Indian Himalayan region, a new species-Cordyceps macleodganensis-was encountered. This species is described on the basis of its macromorphological features, microscopic details, and internal transcribed spacer sequencing. This species showed only 90% resemblance to Cordyceps gracilis. The chemical composition of the mycelium showed protein (14.95 ± 0.2%) and carbohydrates (59.21 ± 3.8%) as the major nutrients. This species showed appreciable amounts of P-carotene, lycopene, phenolic compounds, polysaccharides, and flavonoids. Mycelial culture of this species showed higher effectiveness for ferric-reducing antioxidant power, DPPH radical scavenging activity, ferrous ion-chelating activity, and scavenging ability on superoxide anion-derived radicals, calculated by half-maximal effective concentrations.", "title": "" }, { "docid": "8eb96ae8116a16e24e6a3b60190cc632", "text": "IT professionals are finding that more of their IT investments are being measured against a knowledge management (KM) metric. Those who want to deploy foundation technologies such as groupware, CRM or decision support tools, but fail to justify them on the basis of their contribution to KM, may find it difficult to get funding unless they can frame them within the KM context. Determining KM's pervasiveness and impact is analogous to measuring the contribution of marketing, employee development, or any other management or organizational competency. This paper addresses the problem of developing measurement models for KM metrics and discusses what current KM metrics are in use, and examine their sustainability and soundness in assessing knowledge utilization and retention of generating revenue. The paper will then discuss the use of a Balanced Scorecard approach to determine a business-oriented relationship between strategic KM usage and IT strategy and implementation.", "title": "" }, { "docid": "3f06fc0b50a1de5efd7682b4ae9f5a46", "text": "We present ShadowDraw, a system for guiding the freeform drawing of objects. As the user draws, ShadowDraw dynamically updates a shadow image underlying the user's strokes. The shadows are suggestive of object contours that guide the user as they continue drawing. This paradigm is similar to tracing, with two major differences. First, we do not provide a single image from which the user can trace; rather ShadowDraw automatically blends relevant images from a large database to construct the shadows. Second, the system dynamically adapts to the user's drawings in real-time and produces suggestions accordingly. ShadowDraw works by efficiently matching local edge patches between the query, constructed from the current drawing, and a database of images. A hashing technique enforces both local and global similarity and provides sufficient speed for interactive feedback. Shadows are created by aggregating the edge maps from the best database matches, spatially weighted by their match scores. We test our approach with human subjects and show comparisons between the drawings that were produced with and without the system. 
The results show that our system produces more realistically proportioned line drawings.", "title": "" }, { "docid": "4353dc9fb9d8228d4d6c38d5f94ce068", "text": "In this paper we generalize the quantum algorithm for computing short discrete logarithms previously introduced by Eker̊a [2] so as to allow for various tradeoffs between the number of times that the algorithm need be executed on the one hand, and the complexity of the algorithm and the requirements it imposes on the quantum computer on the other hand. Furthermore, we describe applications of algorithms for computing short discrete logarithms. In particular, we show how other important problems such as those of factoring RSA integers and of finding the order of groups under side information may be recast as short discrete logarithm problems. This immediately gives rise to an algorithm for factoring RSA integers that is less complex than Shor’s general factoring algorithm in the sense that it imposes smaller requirements on the quantum computer. In both our algorithm and Shor’s algorithm, the main hurdle is to compute a modular exponentiation in superposition. When factoring an n bit integer, the exponent is of length 2n bits in Shor’s algorithm, compared to slightly more than n/2 bits in our algorithm.", "title": "" }, { "docid": "d2761d58c3197817be0fa89cf6da62fb", "text": "The proper restraint of the destructive potential of the immune system is essential for maintaining health. Regulatory T (Treg) cells ensure immune homeostasis through their defining ability to suppress the activation and function of other leukocytes. The expression of the transcription factor forkhead box protein P3 (FOXP3) is a well-recognized characteristic of Treg cells, and FOXP3 is centrally involved in the establishment and maintenance of the Treg cell phenotype. In this Review, we summarize how the expression and activity of FOXP3 are regulated across multiple layers by diverse factors. The therapeutic implications of these topics for cancer and autoimmunity are also discussed.", "title": "" }, { "docid": "79729b8f7532617015cbbdc15a876a5c", "text": "We introduce recurrent neural networkbased Minimum Translation Unit (MTU) models which make predictions based on an unbounded history of previous bilingual contexts. Traditional back-off n-gram models suffer under the sparse nature of MTUs which makes estimation of highorder sequence models challenging. We tackle the sparsity problem by modeling MTUs both as bags-of-words and as a sequence of individual source and target words. Our best results improve the output of a phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.5 BLEU, and we outperform the traditional n-gram based MTU approach by up to 0.8 BLEU.", "title": "" }, { "docid": "060ba80e2f3aeef5a3a8d69a14005645", "text": "This paper presents an application of dynamically driven recurrent networks (DDRNs) in online electric vehicle (EV) battery analysis. In this paper, a nonlinear autoregressive with exogenous inputs (NARX) architecture of the DDRN is designed for both state of charge (SOC) and state of health (SOH) estimation. Unlike other techniques, this estimation strategy is subject to the global feedback theorem (GFT) which increases both computational intelligence and robustness while maintaining reasonable simplicity. 
The proposed technique requires no model or knowledge of battery's internal parameters, but rather uses the battery's voltage, charge/discharge currents, and ambient temperature variations to accurately estimate battery's SOC and SOH simultaneously. The presented method is evaluated experimentally using two different batteries namely lithium iron phosphate (<inline-formula> <tex-math notation=\"LaTeX\">$\\text{LiFePO}_4$</tex-math></inline-formula>) and lithium titanate (<inline-formula> <tex-math notation=\"LaTeX\">$\\text{LTO}$</tex-math></inline-formula>) both subject to dynamic charge and discharge current profiles and change in ambient temperature. Results highlight the robustness of this method to battery's nonlinear dynamic nature, hysteresis, aging, dynamic current profile, and parametric uncertainties. The simplicity and robustness of this method make it suitable and effective for EVs’ battery management system (BMS).", "title": "" }, { "docid": "a26d98c1f9cb219f85153e04120053a7", "text": "The purpose of this paper is to examine the academic and athletic motivation and identify the factors that determine the academic performance among university students in the Emirates of Dubai. The study examined motivation based on non-traditional measure adopting a scale to measure both academic as well as athletic motivation. Keywords-academic performance, academic motivation, athletic performance, university students, business management, academic achievement, career motivation, sports motivation", "title": "" }, { "docid": "19f96525e1e3dcc563a7b2138c8b1547", "text": "The state of the art in bidirectional search has changed significantly a very short time period; we now can answer questions about unidirectional and bidirectional search that until very recently we were unable to answer. This paper is designed to provide an accessible overview of the recent research in bidirectional search in the context of the broader efforts over the last 50 years. We give particular attention to new theoretical results and the algorithms they inspire for optimal and nearoptimal node expansions when finding a shortest path. Introduction and Overview Shortest path algorithms have a long history dating to Dijkstra’s algorithm (DA) (Dijkstra 1959). DA is the canonical example of a best-first search which prioritizes state expansions by their g-cost (distance from the start state). Historically, there were two enhancements to DA developed relatively quickly: bidirectional search and the use of heuristics. Nicholson (1966) suggested bidirectional search where the search proceeds from both the start and the goal simultaneously. In a two dimensional search space a search to radius r will visit approximately r states. A bidirectional search will perform two searches of approximately (r/2) states, a reduction of a factor of two. In exponential state spaces the reduction is from b to 2b, an exponential gain in both memory and time. This is illustrated in Figure 1, where the large circle represents a unidirectional search towards the goal, while the smaller circles represent the two parts of a bidirectional search. Just two years later, DA was independently enhanced with admissible heuristics (distance estimates to the goal) that resulted in the A* algorithm (Hart, Nilsson, and Raphael 1968). A* is goal directed – the search is focused towards the goal by the heuristic. This significantly reduces the search effort required to find a path to the goal. 
The obvious challenge was whether these two enhancements could be effectively combined into bidirectional heuristic search (Bi-HS). Pohl (1969) first addressed this challenge showing that in practice unidirectional heuristic search (Uni-HS) seemed to beat out Bi-HS. Many Bi-HS Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. algorithms were developed over the years (see a short survey below), but no such algorithm was shown to consistently outperform Uni-HS. Barker and Korf (2015) recently hypothesized that in most cases one should either use bidirectional brute-force search (Bi-BS) or Uni-HS (e.g. A*), but that Bi-HS is never the best approach. This work spurred further research into Bi-HS, and has lead to new theoretical understanding on the nature of Bi-HS as well as new Bi-HS algorithms (e.g., MM, fMM and NBS described below) with strong theoretical guarantees. The purpose of this paper is to provide a high-level picture of this new line of work while placing it in the larger context of previous work on bidirectional search. While there are still many questions yet to answer, we have, for the first time, the full suite of analytic tools necessary to determine whether bidirectional search will be useful on a given problem instance. This is coupled with a Bi-HS algorithm that is guaranteed to expand no more than twice the minimum number of the necessary state expansions in practice. With these tools we can illustrate use-cases for bidirectional search and point to areas of future research. Terminology and Background We define a shortest-path problem as a n-tuple (start, goal, expF , expB , hF , hB), where the goal is to find the least-cost path between start and goal in a graph G. G is not provided a priori, but is provided implicitly through the expF and expB functions that can expand and return the forward (backwards) successors of any state. Bidirectional search algorithms interleave two separate searches, a search forward from start and a search backward from goal. We use fF , gF and hF to indicate f -, g-, and h-costs in the forward search and fB , gB and hB similarly in the backward search. Likewise, OpenF and OpenB store states generated in the forward and backward directions, respectively. Finally, gminF , gminB , fminF and fminB denote the minimal gand f -values in OpenF and OpenB respectively. d(x, y) denotes the shortest distance between x and y. Front-to-end algorithms use two heuristic functions. The forward heuristic, hF , is forward admissible iff hF (u) ≤ d(u, goal) for all u in G and is forward consistent iff hF (u) ≤ d(u, u′) + hF (u′) for all u and u′ in G. The backward heuristic, hB , is backward admissible iff hB(v) ≤", "title": "" }, { "docid": "52a3cfb08e434560cd0638c682fca7de", "text": "This paper focuses on routing for vehicles getting access to infrastructure either directly or via multiple hops though other vehicles. We study Routing Protocol for Low power and lossy networks (RPL), a tree-based routing protocol designed for sensor networks. Many design elements from RPL are transferable to the vehicular environment. We provide a simulation performance study of RPL and RPL tuning in VANETs. 
More specifically, we seek to study the impact of RPL's various parameters and external factors (e.g., various timers and speeds) on its performance and obtain insights on RPL tuning for its use in VANETs.", "title": "" }, { "docid": "875e165e70000d15b11d724607be1917", "text": "Internet-based Chat environments such as Internet relay Chat and instant messaging pose a challenge for data mining and information retrieval systems due to the multi-threaded, overlapping nature of the dialog and the nonstandard usage of language. In this paper we present preliminary methods of topic detection and topic thread extraction that augment a typical TF-IDF-based vector space model approach with temporal relationship information between posts of the Chat dialog combined with WordNet hypernym augmentation. We show results that promise better performance than using only a TF-IDF bag-of-words vector space model.", "title": "" }, { "docid": "2049d654e8293ee3470834e3a9aeea5f", "text": "In this paper, we analyze the influence of Twitter users in sharing news articles that may affect the readers’ mood. We collected data of more than 2000 Twitter users who shared news articles from Corriere.it, a daily newspaper that provides mood metadata annotated by readers on a voluntary basis. We automatically annotated personality types and communication styles of Twitter users and analyzed the correlations between personality, communication style, Twitter metadata (such as followig and folllowers) and the type of mood associated to the articles they shared. We also run a feature selection task, to find the best predictors of positive and negative mood sharing, and a classification task. We automatically predicted positive and negative mood sharers with 61.7% F1-measure. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b633fbaab6e314535312709557ef1139", "text": "The purification of recombinant proteins by affinity chromatography is one of the most efficient strategies due to the high recovery yields and purity achieved. However, this is dependent on the availability of specific affinity adsorbents for each particular target protein. The diversity of proteins to be purified augments the complexity and number of specific affinity adsorbents needed, and therefore generic platforms for the purification of recombinant proteins are appealing strategies. This justifies why genetically encoded affinity tags became so popular for recombinant protein purification, as these systems only require specific ligands for the capture of the fusion protein through a pre-defined affinity tag tail. There is a wide range of available affinity pairs \"tag-ligand\" combining biological or structural affinity ligands with the respective binding tags. This review gives a general overview of the well-established \"tag-ligand\" systems available for fusion protein purification and also explores current unconventional strategies under development.", "title": "" }, { "docid": "ab4abd9033f87e08656f4363499bc09c", "text": "It is well known that, for most datasets, the use of large-size minibatches for Stochastic Gradient Descent (SGD) typically leads to slow convergence and poor generalization. On the other hand, large minibatches are of great practical interest as they allow for a better exploitation of modern GPUs. Previous literature on the subject concentrated on how to adjust the main SGD parameters (in particular, the learning rate) when using large minibatches. 
In this work we introduce an additional feature, that we call minibatch persistency, that consists in reusing the same minibatch for K consecutive SGD iterations. The computational conjecture here is that a large minibatch contains a significant sample of the training set, so one can afford to slightly overfitting it without worsening generalization too much. The approach is intended to speedup SGD convergence, and also has the advantage of reducing the overhead related to data loading on the internal GPU memory. We present computational results on CIFAR-10 with an AlexNet architecture, showing that even small persistency values (K = 2 or 5) already lead to a significantly faster convergence and to a comparable (or even better) generalization than the standard “disposable minibatch” approach (K = 1), in particular when large minibatches are used. The lesson learned is that minibatch persistency can be a simple yet effective way to deal with large minibatches.", "title": "" }, { "docid": "20710cf5fac30800217c5b9568d3541a", "text": "BACKGROUND\nAcne scarring is treatable by a variety of modalities. Ablative carbon dioxide laser (ACL), while effective, is associated with undesirable side effect profiles. Newer modalities using the principles of fractional photothermolysis (FP) produce modest results than traditional carbon dioxide (CO(2)) lasers but with fewer side effects. A novel ablative CO(2) laser device use a technique called ablative fractional resurfacing (AFR), combines CO(2) ablation with a FP system. This study was conducted to compare the efficacy of Q-switched 1064-nm Nd: YAG laser and that of fractional CO(2) laser in the treatment of patients with moderate to severe acne scarring.\n\n\nMETHODS\nSixty four subjects with moderate to severe facial acne scars were divided randomly into two groups. Group A received Q-Switched 1064-nm Nd: YAG laser and group B received fractional CO(2) laser. Two groups underwent four session treatment with laser at one month intervals. Results were evaluated by patients based on subjective satisfaction and physicians' assessment and photo evaluation by two blinded dermatologists. Assessments were obtained at baseline and at three and six months after final treatment.\n\n\nRESULTS\nPost-treatment side effects were mild and transient in both groups. According to subjective satisfaction (p = 0.01) and physicians' assessment (p < 0.001), fractional CO(2) laser was significantly more effective than Q- Switched 1064- nm Nd: YAG laser.\n\n\nCONCLUSIONS\nFractional CO2 laser has the most significant effect on the improvement of atrophic facial acne scars, compared with Q-Switched 1064-nm Nd: YAG laser.", "title": "" }, { "docid": "7f05bd51c98140417ff73ec2d4420d6a", "text": "An overwhelming number of news articles are available every day via the internet. Unfortunately, it is impossible for us to peruse more than a handful; furthermore it is difficult to ascertain an article’s social context, i.e., is it popular, what sorts of people are reading it, etc. In this paper, we develop a system to address this problem in the restricted domain of political news by harnessing implicit and explicit contextual information from the blogosphere. Specifically, we track thousands of blogs and the news articles they cite, collapsing news articles that have highly overlapping content. 
We then tag each article with the number of blogs citing it, the political orientation of those blogs, and the level of emotional charge expressed in the blog posts that link to the news article. We summarize and present the results to the user via a novel visualization which displays this contextual information; the user can then find the most popular articles, the articles most cited by liberals, the articles most emotionally discussed in the political blogosphere, etc.", "title": "" }, { "docid": "b8ac61e2026f3dd7e775d440dcb43772", "text": "This paper presents a design methodology of a highly efficient power link based on Class-E driven, inductively coupled coil pair. An optimal power link design for retinal prosthesis and/or other implants must take into consideration the allowable safety limits of magnetic fields, which in turn govern the inductances of the primary and secondary coils. In retinal prosthesis, the optimal coil inductances have to deal with the constraints of the coil sizes, the tradeoffs between the losses, H-field limitation and dc supply voltage required by the Class-E driver. Our design procedure starts with the formation of equivalent circuits, followed by the analysis of the loss of the rectifier and coils and the H-field for induced voltage and current. Both linear and nonlinear models for the analysis are presented. Based on the procedure, an experimental power link is implemented with an overall efficiency of 67% at the optimal distance of 7 mm between the coils. In addition to the coil design methodology, we are also presenting a closed-loop control of Class-E amplifier for any duty cycle and any value of the systemQ.", "title": "" }, { "docid": "b53bd3f4a0d8933d9af0f5651a445800", "text": "Requirements for implemented system can be extracted and reused for a production of a new similar system. Extraction of common and variable features from requirements leverages the benefits of the software product lines engineering (SPLE). Although various approaches have been proposed in feature extractions from natural language (NL) requirements, no related literature review has been published to date for this topic. This paper provides a systematic literature review (SLR) of the state-of-the-art approaches in feature extractions from NL requirements for reuse in SPLE. We have included 13 studies in our synthesis of evidence and the results showed that hybrid natural language processing approaches were found to be in common for overall feature extraction process. A mixture of automated and semi-automated feature clustering approaches from data mining and information retrieval were also used to group common features, with only some approaches coming with support tools. However, most of the support tools proposed in the selected studies were not made available publicly and thus making it hard for practitioners’ adoption. As for the evaluation, this SLR reveals that not all studies employed software metrics as ways to validate experiments and case studies. Finally, the quality assessment conducted confirms that practitioners’ guidelines were absent in the selected studies. © 2015 Elsevier Inc. All rights reserved. c t t t r c S o r ( l w t r t", "title": "" }, { "docid": "90125582272e3f16a34d5d0c885f573a", "text": "RNAs have been shown to undergo transfer between mammalian cells, although the mechanism behind this phenomenon and its overall importance to cell physiology is not well understood. 
Numerous publications have suggested that RNAs (microRNAs and incomplete mRNAs) undergo transfer via extracellular vesicles (e.g., exosomes). However, in contrast to a diffusion-based transfer mechanism, we find that full-length mRNAs undergo direct cell-cell transfer via cytoplasmic extensions characteristic of membrane nanotubes (mNTs), which connect donor and acceptor cells. By employing a simple coculture experimental model and using single-molecule imaging, we provide quantitative data showing that mRNAs are transferred between cells in contact. Examples of mRNAs that undergo transfer include those encoding GFP, mouse β-actin, and human Cyclin D1, BRCA1, MT2A, and HER2. We show that intercellular mRNA transfer occurs in all coculture models tested (e.g., between primary cells, immortalized cells, and in cocultures of immortalized human and murine cells). Rapid mRNA transfer is dependent upon actin but is independent of de novo protein synthesis and is modulated by stress conditions and gene-expression levels. Hence, this work supports the hypothesis that full-length mRNAs undergo transfer between cells through a refined structural connection. Importantly, unlike the transfer of miRNA or RNA fragments, this process of communication transfers genetic information that could potentially alter the acceptor cell proteome. This phenomenon may prove important for the proper development and functioning of tissues as well as for host-parasite or symbiotic interactions.", "title": "" }, { "docid": "03614f11b2b6800384e229c37967030d", "text": "Data Analytics is widely used in many industries and organization to make a better Business decision. By applying analytics to the structured and unstructured data the enterprises brings a great change in their way of planning and decision making. Sentiment analysis (or) opinion mining plays a significant role in our daily decision making process. These decisions may range from purchasing a product such as mobile phone to reviewing the movie to making investments — all the decisions will have a huge impact on the daily life. Sentiment Analysis is dealing with various issues such as Polarity Shift, accuracy related issues, Binary Classification problem and Data sparsity problem. However various methods were introduced for performing sentiment analysis, still that are not efficient in extracting the sentiment features from the given content of text. Naive Bayes, Support Vector Machine, Maximum Entropy are the machine learning algorithms used for sentiment analysis which has only a limited sentiment classification category ranging between positive and negative. Especially supervised and unsupervised algorithms have only limited accuracy in handling polarity shift and binary classification problem. Even though the advancement in sentiment Analysis technique there are various issues still to be noticed and make the analysis not accurately and efficiently. So this paper presents the survey on various sentiment Analysis methodologies and approaches in detailed. This will be helpful to earn clear knowledge about sentiment analysis methodologies. At last the comparison is made between various paper's approach and issues addressed along with the metrics used.", "title": "" } ]
scidocsrr
4fa41696b7aea8fcce8b2cc93d7d85c2
Planning human-aware motions using a sampling-based costmap planner
[ { "docid": "b86dd4b34965b15af417da275de761c4", "text": "This article considered the problem of designing joint-actuation mechanisms that can allow fast and accurate operation of a robot arm, while guaranteeing a suitably limited level of injury risk. Different approaches to the problem were presented, and a method of performance evaluation was proposed based on minimum-time optimal control with safety constraints. The variable stiffness transmission (VST) scheme was found to be one of a few different possible schemes that allows the most flexibility and potential performance. Some aspects related to the implementation of the mechanics and control of VST actuation were also reported.", "title": "" } ]
[ { "docid": "53b48550158b06dfbdb8c44a4f7241c6", "text": "The primary aim of the study was to examine the relationship between media exposure and body image in adolescent girls, with a particular focus on the ‘new’ and as yet unstudied medium of the Internet. A sample of 156 Australian female high school students (mean age= 14.9 years) completed questionnaire measures of media consumption and body image. Internet appearance exposure and magazine reading, but not television exposure, were found to be correlated with greater internalization of thin ideals, appearance comparison, weight dissatisfaction, and drive for thinness. Regression analyses indicated that the effects of magazines and Internet exposure were mediated by internalization and appearance comparison. It was concluded that the Internet represents a powerful sociocultural influence on young women’s lives.", "title": "" }, { "docid": "7f3c6e8f0915160bbc9feba4d2175fb3", "text": "Memory leaks are major problems in all kinds of applications, depleting their performance, even if they run on platforms with automatic memory management, such as Java Virtual Machine. In addition, memory leaks contribute to software aging, increasing the complexity of software maintenance. So far memory leak detection was considered to be a part of development process, rather than part of software maintenance. To detect slow memory leaks as a part of quality assurance process or in production environments statistical approach for memory leak detection was implemented and deployed in a commercial tool called Plumbr. It showed promising results in terms of leak detection precision and recall, however, even better detection quality was desired. To achieve this improvement goal, classification algorithms were applied to the statistical data, which was gathered from customer environments where Plumbr was deployed. This paper presents the challenges which had to be solved, method that was used to generate features for supervised learning and the results of the corresponding experiments.", "title": "" }, { "docid": "97bcae9e2ca08038a82c9c46b717cd4f", "text": "The Internet of Things (IoT) networks are vulnerable to various kinds of attacks, being the sinkhole attack one of the most destructive since it prevents communication among network devices. In general, existing solutions are not effective to provide protection and security against attacks sinkhole on IoT, and they also introduce high consumption of resources de memory, storage and processing. Further, they do not consider the impact of device mobility, which in essential in urban scenarios, like smart cities. This paper proposes an intrusion detection system, called INTI (Intrusion detection of SiNkhole attacks on 6LoWPAN for InterneT of ThIngs), to identify sinkhole attacks on the routing services in IoT. Moreover, INTI aims to mitigate adverse effects found in IDS that disturb its performance, like false positive and negative, as well as the high resource cost. The system combines watchdog, reputation and trust strategies for detection of attackers by analyzing the behavior of devices. 
Results show the INTI performance and its effectiveness in terms of attack detection rate, number of false positives and false negatives.", "title": "" }, { "docid": "c6e1c8aa6633ec4f05240de1a3793912", "text": "Medial prefrontal cortex (MPFC) is among those brain regions having the highest baseline metabolic activity at rest and one that exhibits decreases from this baseline across a wide variety of goal-directed behaviors in functional imaging studies. This high metabolic rate and this behavior suggest the existence of an organized mode of default brain function, elements of which may be either attenuated or enhanced. Extant data suggest that these MPFC regions may contribute to the neural instantiation of aspects of the multifaceted \"self.\" We explore this important concept by targeting and manipulating elements of MPFC default state activity. In this functional magnetic resonance imaging (fMRI) study, subjects made two judgments, one self-referential, the other not, in response to affectively normed pictures: pleasant vs. unpleasant (an internally cued condition, ICC) and indoors vs. outdoors (an externally cued condition, ECC). The ICC was preferentially associated with activity increases along the dorsal MPFC. These increases were accompanied by decreases in both active task conditions in ventral MPFC. These results support the view that dorsal and ventral MPFC are differentially influenced by attentiondemanding tasks and explicitly self-referential tasks. The presence of self-referential mental activity appears to be associated with increases from the baseline in dorsal MPFC. Reductions in ventral MPFC occurred consistent with the fact that attention-demanding tasks attenuate emotional processing. We posit that both self-referential mental activity and emotional processing represent elements of the default state as represented by activity in MPFC. We suggest that a useful way to explore the neurobiology of the self is to explore the nature of default state activity.", "title": "" }, { "docid": "3033ef7f981399614efc45c62b1ac475", "text": "This paper describes an integrated system architecture for automotive electronic systems based on multicore systems-on-chips (SoCs). We integrate functions from different suppliers into a few powerful electronic control units using a dedicated core for each function. This work is fueled by technological opportunities resulting from recent advances in the semiconductor industry and the challenges of providing dependable automotive electronic systems at competitive costs. The presented architecture introduces infrastructure IP cores to overcome key challenges in moving to automotive multicore SoCs: a time-triggered network-on-a-chip with fault isolation for the interconnection of functional IP cores, a diagnostic IP core for error detection and state recovery, a gateway IP core for interfacing legacy systems, and an IP core for reconfiguration. This paper also outlines the migration from today's federated architectures to the proposed integrated architecture using an exemplary automotive E/E system.", "title": "" }, { "docid": "f958c7d3d27ee79c9dee944716139025", "text": "We present a tunable flipflop-based frequency divider and a fully differential push-push VCO designed in a 200GHz fT SiGe BiCMOS technology. A new technique for tuning the sensitivity of the divider in the frequency range of interest is presented. The chip works from 60GHz up to 113GHz. The VCO is based on a new topology which allows generating differential push-push outputs. 
The VCO shows a tuning range larger than 7GHz. The phase noise is 75dBc/Hz at 100kHz offset. The chip shows a frequency drift of 12.3MHz/C. The fundamental signal suppression is larger than 50dB. The output power is 2×5dBm. At a 3.3V supply, the circuits consume 35mA and 65mA, respectively.", "title": "" }, { "docid": "bfb5186d9404c590dd74586ae27ebb29", "text": "The idea of having a wireless PROFIBUS is appealing, since this can bring benefits like reduced cabling need and mobile stations to the factory floor. But unfortunately wireless transmission is error-prone, which affects the timeliness and reliability behavior users expect from a fieldbus system (hard real-time). In this paper we compare two different approaches for the medium access control and link layer of a wireless PROFIBUS system with respect to their so-called real-time performance in the presence of transmission errors. Specifically, we compare the existing PROFIBUS MAC and link layer protocol with a simple round-robin protocol. It shows up that round-robin delivers significantly better real-time performance than the PROFIBUS protocol under bursty error conditions. In a second step we propose three add-ons to round-robin and we show that they further increase the real-time performance of round-robin. The add-ons take certain characteristics of the wireless medium into account. Keywords— PROFIBUS, Wireless PROFIBUS, Real-Time Performance, MAC Protocols, Polling", "title": "" }, { "docid": "5ae415a28817c2bb774989b55e2f68b3", "text": "Many applications of unmanned aerial vehicles (UAVs) require the capability to navigate to some goal and to perform precise and safe landing. In this paper, we present a visual navigation system as an alternative pose estimation method for environments and situations in which GPS is unavailable. The developed visual odometer is an incremental procedure that estimates the vehicle's ego-motion by extracting and tracking visual features, using an onboard camera. For more robustness and accuracy, the visual estimates are fused with measurements from an Inertial Measurement Unit (IMU) and a Pressure Sensor Altimeter (PSA) in order to provide accurate estimates of the vehicle's height, velocity and position relative to a given location. These estimates are then exploited by a nonlinear hierarchical controller for achieving various navigation tasks such as take-off, landing, hovering, target tracking, etc. In addition to the odometer description, the paper presents validation results from autonomous flights using a small quadrotor UAV.", "title": "" }, { "docid": "3a709dd22392905d05fd4d737597ad4d", "text": "Lung cancer is the most common cancer that cannot be ignored and cause death with late health care. Currently, CT can be used to help doctors detect the lung cancer in the early stages. In many cases, the diagnosis of identifying the lung cancer depends on the experience of doctors, which may ignore some patients and cause some problems. Deep learning has been proved as a popular and powerful method in many medical imaging diagnosis areas. In this paper, three types of deep neural networks (e.g., CNN, DNN, and SAE) are designed for lung cancer calcification. Those networks are applied to the CT image classification task with some modification for the benign and malignant lung nodules. Those networks were evaluated on the LIDC-IDRI database. 
The experimental results show that the CNN network archived the best performance with an accuracy of 84.15%, sensitivity of 83.96%, and specificity of 84.32%, which has the best result among the three networks.", "title": "" }, { "docid": "cff671af6a7a170fac2daf6acd9d1e3e", "text": "We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and gi ve a much better representation of each document than Latent Sem antic Analysis. When the deepest layer is forced to use a small numb er of binary variables (e.g. 32), the graphical model performs “semantic hashing”: Documents are mapped to memory addresses in such a way that semantically similar documents are located at near by ddresses. Documents similar to a query document can then be fo und by simply accessing all the addresses that differ by only a fe w bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much fa ster than locality sensitive hashing, which is the fastest curre nt method. By using semantic hashing to filter the documents given to TFID , we achieve higher accuracy than applying TF-IDF to the entir document set.", "title": "" }, { "docid": "ed82ac5cf6cf4173fde52a25c17b86aa", "text": "The biological process and molecular functions involved in the cancer progression remain difficult to understand for biologists and clinical doctors. Recent developments in high-throughput technologies urge the systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the developments of high-throughput technologies and systemic modeling of the biological process in cancer research. In this review, we firstly studied several typical mathematical modeling approaches of biological systems in different scales and deeply analyzed their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling were summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology.", "title": "" }, { "docid": "60609a5a76e9fdb6b4771774d916b312", "text": "Multimedia on demand (MOD) is an interactive system that provides a number of value-added services in addition to traditional TV services, such as video on demand and interactive online learning. This opens a new marketing and managerial problem for the telecommunication industry to retain valuable MOD customers. Data mining techniques have been widely applied to develop customer churn prediction models, such as neural networks and decision trees in the domain of mobile telecommunication. However, much related work focuses on developing the prediction models per se. Few studies consider the pre-processing step during data mining whose aim is to filter out unrepresentative data or information. This paper presents the important processes of developing MOD customer churn prediction models by data mining techniques. 
They contain the pre-processing stage for selecting important variables by association rules, which have not been applied before, the model construction stage by neural networks (NN) and decision trees (DT), which are widely adapted in the literature, and four evaluation measures including prediction accuracy, precision, recall, and F-measure, all of which have not been considered to examine the model performance. The source data are based on one telecommunication company providing the MOD services in Taiwan, and the experimental results show that using association rules allows the DT and NN models to provide better prediction performances over a chosen validation dataset. In particular, the DT model performs better than the NN model. Moreover, some useful and important rules in the DT model, which show the factors affecting a high proportion of customer churn, are also discussed for the marketing and managerial purpose. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "09605b9eef9c02ee01088d6688519a60", "text": "People endorse the great power of cloud computing, but cannot fully trust the cloud providers to host privacy-sensitive data, due to the absence of user-to-cloud controllability. To ensure confidentiality, data owners outsource encrypted data instead of plaintexts. To share the encrypted files with other users, ciphertext-policy attribute-based encryption (CP-ABE) can be utilized to conduct fine-grained and owner-centric access control. But this does not sufficiently become secure against other attacks. Many previous schemes did not grant the cloud provider the capability to verify whether a downloader can decrypt. Therefore, these files should be available to everyone accessible to the cloud storage. A malicious attacker can download thousands of files to launch economic denial of sustainability (EDoS) attacks, which will largely consume the cloud resource. The payer of the cloud service bears the expense. Besides, the cloud provider serves both as the accountant and the payee of resource consumption fee, lacking the transparency to data owners. These concerns should be resolved in real-world public cloud storage. In this paper, we propose a solution to secure encrypted cloud storages from EDoS attacks and provide resource consumption accountability. It uses CP-ABE schemes in a black-box manner and complies with arbitrary access policy of the CP-ABE. We present two protocols for different settings, followed by performance and security analysis.", "title": "" }, { "docid": "c84ef3f7dfa5e3219a6c1c2f98109651", "text": "We present JetStream, a system that allows real-time analysis of large, widely-distributed changing data sets. Traditional approaches to distributed analytics require users to specify in advance which data is to be backhauled to a central location for analysis. This is a poor match for domains where available bandwidth is scarce and it is infeasible to collect all potentially useful data. JetStream addresses bandwidth limits in two ways, both of which are explicit in the programming model. The system incorporates structured storage in the form of OLAP data cubes, so data can be stored for analysis near where it is generated. Using cubes, queries can aggregate data in ways and locations of their choosing. The system also includes adaptive filtering and other transformations that adjusts data quality to match available bandwidth. 
Many bandwidth-saving transformations are possible; we discuss which are appropriate for which data and how they can best be combined. We implemented a range of analytic queries on web request logs and image data. Queries could be expressed in a few lines of code. Using structured storage on source nodes conserved network bandwidth by allowing data to be collected only when needed to fulfill queries. Our adaptive control mechanisms are responsive enough to keep end-to-end latency within a few seconds, even when available bandwidth drops by a factor of two, and are flexible enough to express practical policies.", "title": "" }, { "docid": "2f88356c3a1ab60e3dd084f7d9630c70", "text": "Recently, some E-commerce sites launch a new interaction box called Tips on their mobile apps. Users can express their experience and feelings or provide suggestions using short texts typically several words or one sentence. In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user experience and feelings. Jointly modeling these two facets is helpful for designing a better recommendation system. While some existing models integrate text information such as item specifications or user reviews into user and item latent factors for improving the rating prediction, no existing works consider tips for improving recommendation quality. We propose a deep learning based framework named NRT which can simultaneously predict precise ratings and generate abstractive tips with good linguistic quality simulating user experience and feelings. For abstractive tips generation, gated recurrent neural networks are employed to \"translate'' user and item latent representations into a concise sentence. Extensive experiments on benchmark datasets from different domains show that NRT achieves significant improvements over the state-of-the-art methods. Moreover, the generated tips can vividly predict the user experience and feelings.", "title": "" }, { "docid": "4054797603b65bc694bf239c1fbfb96a", "text": "Cloud computing systems promise to offer subscription-oriented, enterprise-quality computing services to users worldwide. With the increased demand for delivering services to a large number of users, they need to offer differentiated services to users and meet their quality expectations. Existing resource management systems in data centers are yet to support Service Level Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to realize cloud computing and utility computing. In addition, no work has been done to collectively incorporate customer-driven service management, computational risk management, and autonomic resource management into a market-based resource management system to target the rapidly changing enterprise requirements of Cloud computing. This paper presents vision, challenges, and architectural elements of SLA-oriented resource management. The proposed architecture supports integration of market-based provisioning policies and virtualisation technologies for flexible allocation of resources to applications. 
The performance results obtained from our working prototype system shows the feasibility and effectiveness of SLA-based resource provisioning in Clouds.", "title": "" }, { "docid": "6a4161d4badb82fbac04dacafbeea6c0", "text": "Physiological and anatomical findings in the primate visual system, as well as clinical evidence in humans, suggest that different components of visual information processing are segregated into largely independent parallel pathways. Such a segregation leads to certain predictions about human vision. In this paper we describe psychophysical experiments on the interactions of color, form, depth, and movement in human perception, and we attempt to correlate these aspects of visual perception with the different subdivisions of the visual system.", "title": "" }, { "docid": "ac41c57bcb533ab5dabcc733dd69a705", "text": "In this paper we propose two ways to deal with the imbalanced data classification problem using random forest. One is based on cost sensitive learning, and the other is based on a sampling technique. Performance metrics such as precision and recall, false positive rate and false negative rate, F-measure and weighted accuracy are computed. Both methods are shown to improve the prediction accuracy of the minority class, and have favorable performance compared to the existing algorithms.", "title": "" }, { "docid": "9af703a47d382926698958fba88c1e1a", "text": "Nowadays, the use of agile software development methods like Scrum is common in industry and academia. Considering the current attacking landscape, it is clear that developing secure software should be a main concern in all software development projects. In traditional software projects, security issues require detailed planning in an initial planning phase, typically resulting in a detailed security analysis (e.g., threat and risk analysis), a security architecture, and instructions for security implementation (e.g., specification of key sizes and cryptographic algorithms to use). Agile software development methods like Scrum are known for reducing the initial planning phases (e.g., sprint 0 in Scrum) and for focusing more on producing running code. Scrum is also known for allowing fast adaption of the emerging software to changes of customer wishes. For security, this means that it is likely that there are no detailed security architecture or security implementation instructions from the start of the project. It also means that a lot of design decisions will be made during the runtime of the project. Hence, to address security in Scrum, it is necessary to consider security issues throughout the whole software development process. Secure Scrum is a variation of the Scrum framework with special focus on the development of secure software throughout the whole software development process. It puts emphasis on implementation of security related issues without the need of changing the underlying Scrum process or influencing team dynamics. Secure Scrum allows even non-security experts to spot security issues, to implement security features, and to verify implementations. A field test of Secure Scrum shows that the security level of software developed using Secure Scrum is higher then the security level of software developed using standard Scrum.", "title": "" }, { "docid": "04c60b1bc04886086382402e9c14717d", "text": "This paper proposes a novel robust and adaptive sliding-mode (SM) control for a cascaded two-level inverter (CTLI)-based grid-connected photovoltaic (PV) system. 
The modeling and design of the control scheme for the CTLI-based grid-connected PV system is developed to supply active power and reactive power with variable solar irradiance. A vector controller is developed, keeping the maximum power delivery of the PV in consideration. Two different switching schemes have been considered to design SM controllers and studied under similar operating situations. Instead of the referred space vector pulsewidth modulation (PWM) technique, a simple PWM modulation technique is used for the operation of the proposed SM controller. The performance of the SM controller is improved by using an adaptive hysteresis band calculation. The controller performance is found to be satisfactory for both the schemes at considered load and solar irradiance level variations in simulation environment. The laboratory prototype, operated with the proposed controller, is found to be capable of implementing the control algorithm successfully in the considered situation.", "title": "" } ]
scidocsrr
1d98b8644cdf9a4d8002019c30e054a1
Short text classification by detecting information path
[ { "docid": "fe3029a9e54f068a1387014778c1128d", "text": "We propose a simple, scalable, and non-parametric approach for short text classification. Leveraging the well studied and scalable Information Retrieval (IR) framework, our approach mimics human labeling process for a piece of short text. It first selects the most representative and topical-indicative words from a given short text as query words, and then searches for a small set of labeled short texts best matching the query words. The predicted category label is the majority vote of the search results. Evaluated on a collection of more than 12K Web snippets, the proposed approach achieves comparable classification accuracy with the baseline Maximum Entropy classifier using as few as 3 query words and top-5 best matching search hits. Among the four query word selection schemes proposed and evaluated in our experiments, term frequency together with clarity gives the best classification accuracy.", "title": "" }, { "docid": "95689f439fababe920921ee419965b90", "text": "In traditional text clustering methods, documents are represented as \"bags of words\" without considering the semantic information of each document. For instance, if two documents use different collections of core words to represent the same topic, they may be falsely assigned to different clusters due to the lack of shared core words, although the core words they use are probably synonyms or semantically associated in other forms. The most common way to solve this problem is to enrich document representation with the background knowledge in an ontology. There are two major issues for this approach: (1) the coverage of the ontology is limited, even for WordNet or Mesh, (2) using ontology terms as replacement or additional features may cause information loss, or introduce noise. In this paper, we present a novel text clustering method to address these two issues by enriching document representation with Wikipedia concept and category information. We develop two approaches, exact match and relatedness-match, to map text documents to Wikipedia concepts, and further to Wikipedia categories. Then the text documents are clustered based on a similarity metric which combines document content information, concept information as well as category information. The experimental results using the proposed clustering framework on three datasets (20-newsgroup, TDT2, and LA Times) show that clustering performance improves significantly by enriching document representation with Wikipedia concepts and categories.", "title": "" }, { "docid": "639bbe7b640c514ab405601c7c3cfa01", "text": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. 
The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.", "title": "" }, { "docid": "e59d1a3936f880233001eb086032d927", "text": "In microblogging services such as Twitter, the users may become overwhelmed by the raw data. One solution to this problem is the classification of short text messages. As short texts do not provide sufficient word occurrences, traditional classification methods such as \"Bag-Of-Words\" have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages.", "title": "" }, { "docid": "3bee61e95acf274c01f1846233b3c3bb", "text": "One key difficulty with text classification learning algorithms is that they require many hand-labeled examples to learn accurately. This dissertation demonstrates that supervised learning algorithms that use a small number of labeled examples and many inexpensive unlabeled examples can create high-accuracy text classifiers. By assuming that documents are created by a parametric generative model, Expectation-Maximization (EM) finds local maximum a posteriori models and classifiers from all the data—labeled and unlabeled. These generative models do not capture all the intricacies of text; however on some domains this technique substantially improves classification accuracy, especially when labeled data are sparse. Two problems arise from this basic approach. First, unlabeled data can hurt performance in domains where the generative modeling assumptions are too strongly violated. In this case the assumptions can be made more representative in two ways: by modeling sub-topic class structure, and by modeling super-topic hierarchical class relationships. By doing so, model probability and classification accuracy come into correspondence, allowing unlabeled data to improve classification performance. The second problem is that even with a representative model, the improvements given by unlabeled data do not sufficiently compensate for a paucity of labeled data. Here, limited labeled data provide EM initializations that lead to low-probability models. Performance can be significantly improved by using active learning to select high-quality initializations, and by using alternatives to EM that avoid low-probability local maxima.", "title": "" } ]
[ { "docid": "9948ebbd2253021e3af53534619c5094", "text": "This paper presents a novel method to simultaneously estimate the clothed and naked 3D shapes of a person. The method needs only a single photograph of a person wearing clothing. Firstly, we learn a deformable model of human clothed body shapes from a database. Then, given an input image, the deformable model is initialized with a few user-specified 2D joints and contours of the person. And the correspondence between 3D shape and 2D contours is established automatically. Finally, we optimize the parameters of the deformable model in an iterative way, and then obtain the clothed and naked 3D shapes of the person simultaneously. The experimental results on real images demonstrate the effectiveness of our method.", "title": "" }, { "docid": "629f6ab006700e5bc6b5a001a4d925e5", "text": "Model predictive control (MPC) is an effective method for controlling robotic systems, particularly autonomous aerial vehicles such as quadcopters. However, application of MPC can be computationally demanding, and typically requires estimating the state of the system, which can be challenging in complex, unstructured environments. Reinforcement learning can in principle forego the need for explicit state estimation and acquire a policy that directly maps sensor readings to actions, but is difficult to apply to unstable systems that are liable to fail catastrophically during training before an effective policy has been found. We propose to combine MPC with reinforcement learning in the framework of guided policy search, where MPC is used to generate data at training time, under full state observations provided by an instrumented training environment. This data is used to train a deep neural network policy, which is allowed to access only the raw observations from the vehicle's onboard sensors. After training, the neural network policy can successfully control the robot without knowledge of the full state, and at a fraction of the computational cost of MPC. We evaluate our method by learning obstacle avoidance policies for a simulated quadrotor, using simulated onboard sensors and no explicit state estimation at test time.", "title": "" }, { "docid": "fc7efee1840ef385537f1686859da87c", "text": "The self-oscillating converter is a popular circuit for cost-sensitive applications due to its simplicity and low component count. It is widely employed in mobile phone charges and as the stand-by power source in offline power supplies for data-processing equipment. However, this circuit almost was not explored for supplier Power LEDs. This paper presents a self-oscillating buck power electronics driver for supply directly Power LEDs, with no additional circuit. A simplified mathematical model of LED was used to characterize the self-oscillating converter for the power LED driver. In order to improve the performance of the proposed buck converter in this work the control of the light intensity of LEDs was done using a microcontroller to emulate PWM modulation with frequency 200 Hz. At using the converter proposed the effects of the LED manufacturing tolerances and drifts over temperature almost has no influence on the LED average current.", "title": "" }, { "docid": "07ffe189312da8519c4a6260402a0b22", "text": "Computational social science is an emerging research area at the intersection of computer science, statistics, and the social sciences, in which novel computational methods are used to answer questions about society. 
The field is inherently collaborative: social scientists provide vital context and insight into pertinent research questions, data sources, and acquisition methods, while statisticians and computer scientists contribute expertise in developing mathematical models and computational tools. New, large-scale sources of demographic, behavioral, and network data from the Internet, sensor networks, and crowdsourcing systems augment more traditional data sources to form the heart of this nascent discipline, along with recent advances in machine learning, statistics, social network analysis, and natural language processing. The related research area of social computing deals with the mechanisms through which people interact with computational systems, examining questions such as how and why people contribute user-generated content and how to design systems that better enable them to do so. Examples of social computing systems include prediction markets, crowdsourcing markets, product review sites, and collaboratively edited wikis, all of which encapsulate some notion of aggregating crowd wisdom, beliefs, or ideas—albeit in different ways. Like computational social science, social computing blends techniques from machine learning and statistics with ideas from the social sciences. For example, the economics literature on incentive design has been especially influential.", "title": "" }, { "docid": "53e6216c2ad088dfcf902cc0566072c6", "text": "The floating photovoltaic system is a new concept in energy technology to meet the needs of our time. The system integrates existing land based photovoltaic technology with a newly developed floating photovoltaic technology. Because module temperature of floating PV system is lower than that of overland PV system, the floating PV system has 11% better generation efficiency than overland PV system. In the thesis, superiority of floating PV system is verified through comparison analysis of generation amount by 2.4kW, 100kW and 500kW floating PV system installed by K-water and the cause of such superiority was analyzed. Also, effect of wind speed, and waves on floating PV system structure was measured to analyze the effect of the environment on floating PV system generation efficiency.", "title": "" }, { "docid": "f0e22717207ed3bc013d09db3edc337c", "text": "The bag-of-words model is one of the most popular representation methods for object categorization. The key idea is to quantize each extracted key point into one of visual words, and then represent each image by a histogram of the visual words. For this purpose, a clustering algorithm (e.g., K-means), is generally used for generating the visual words. Although a number of studies have shown encouraging results of the bag-of-words representation for object categorization, theoretical studies on properties of the bag-of-words model is almost untouched, possibly due to the difficulty introduced by using a heuristic clustering process. In this paper, we present a statistical framework which generalizes the bag-of-words representation. In this framework, the visual words are generated by a statistical process rather than using a clustering algorithm, while the empirical performance is competitive to clustering-based method. A theoretical analysis based on statistical consistency is presented for the proposed framework. 
Moreover, based on the framework we developed two algorithms which do not rely on clustering, while achieving competitive performance in object categorization when compared to clustering-based bag-of-words representations.", "title": "" }, { "docid": "eff407fb0d45ebeea3d5965b7b5df14b", "text": "In order to develop intelligent systems that attain the trust of their users, it is important to understand how users perceive such systems and develop those perceptions over time. We present an investigation into how users come to understand an intelligent system as they use it in their daily work. During a six-week field study, we interviewed eight office workers regarding the operation of a system that predicted their managers' interruptibility, comparing their mental models to the actual system model. Our results show that by the end of the study, participants were able to discount some of their initial misconceptions about what information the system used for reasoning about interruptibility. However, the overarching structures of their mental models stayed relatively stable over the course of the study. Lastly, we found that participants were able to give lay descriptions attributing simple machine learning concepts to the system despite their lack of technical knowledge. Our findings suggest an appropriate level of feedback for user interfaces of intelligent systems, provide a baseline level of complexity for user understanding, and highlight the challenges of making users aware of sensed inputs for such systems.", "title": "" }, { "docid": "64c2b9f59a77f03e6633e5804356e9fc", "text": "AbstructWe present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with less than two redundant disks. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employes optimal redundant storage (Le., two extra disks) is based on ReedSolomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of the one required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error.", "title": "" }, { "docid": "4277894ef2bf88fd3a78063a8b0cc7fe", "text": "This paper deals with a design method of LCL filter for grid-connected three-phase PWM voltage source inverters (VSI). By analyzing the total harmonic distortion of the current (THDi) in the inverter-side inductor and the ripple attenuation factor of the current (RAF) injected to the grid through the LCL network, the parameter of LCL can be clearly designed. 
The described LCL filter design method is verified by showing a good agreement between the target current THD and the actual one through simulation and experiment.", "title": "" }, { "docid": "969ba9848fa6d02f74dabbce2f1fe3ab", "text": "With the rapid growth of social media, massive misinformation is also spreading widely on social media, e.g., Weibo and Twitter, and brings negative effects to human life. Today, automatic misinformation identification has drawn attention from academic and industrial communities. Whereas an event on social media usually consists of multiple microblogs, current methods are mainly constructed based on global statistical features. However, information on social media is full of noise, which should be alleviated. Moreover, most of the microblogs about an event have little contribution to the identification of misinformation, where useful information can be easily overwhelmed by useless information. Thus, it is important to mine significant microblogs for constructing a reliable misinformation identification method. In this article, we propose an attention-based approach for identification of misinformation (AIM). Based on the attention mechanism, AIM can select microblogs with the largest attention values for misinformation identification. The attention mechanism in AIM contains two parts: content attention and dynamic attention. Content attention is the calculated-based textual features of each microblog. Dynamic attention is related to the time interval between the posting time of a microblog and the beginning of the event. To evaluate AIM, we conduct a series of experiments on the Weibo and Twitter datasets, and the experimental results show that the proposed AIM model outperforms the state-of-the-art methods.", "title": "" }, { "docid": "a91a57326a2d961e24d13b844a3556cf", "text": "This paper describes an interactive and adaptive streaming architecture that exploits temporal concatenation of H.264/AVC video bit-streams to dynamically adapt to both user commands and network conditions. The architecture has been designed to improve the viewing experience when accessing video content through individual and potentially bandwidth constrained connections. On the one hand, the user commands typically gives the client the opportunity to select interactively a preferred version among the multiple video clips that are made available to render the scene, e.g. using different view angles, or zoomed-in and slowmotion factors. On the other hand, the adaptation to the network bandwidth ensures effective management of the client buffer, which appears to be fundamental to reduce the client-server interaction latency, while maximizing video quality and preventing buffer underflow. In addition to user interaction and network adaptation, the deployment of fully autonomous infrastructures for interactive content distribution also requires the development of automatic versioning methods. Hence, the paper also surveys a number of approaches proposed for this purpose in surveillance and sport event contexts. Both objective metrics and subjective experiments are exploited to assess our system.", "title": "" }, { "docid": "3d20ba5dc32270cb75df7a2d499a70e4", "text": "The Maximum Margin Planning (MMP) (Ratliff et al., 2006) algorithm solves imitation learning problems by learning linear mappings from features to cost functions in a planning domain. The learned policy is the result of minimum-cost planning using these cost functions. 
These mappings are chosen so that example policies (or trajectories) given by a teacher appear to be lower cost (with a lossscaled margin) than any other policy for a given planning domain. We provide a novel approach, MMPBOOST , based on the functional gradient descent view of boosting (Mason et al., 1999; Friedman, 1999a) that extends MMP by “boosting” in new features. This approach uses simple binary classification or regression to improve performance of MMP imitation learning, and naturally extends to the class of structured maximum margin prediction problems. (Taskar et al., 2005) Our technique is applied to navigation and planning problems for outdoor mobile robots and robotic legged locomotion.", "title": "" }, { "docid": "1d5e363647bd8018b14abfcc426246bb", "text": "This paper presents a new approach to improve the performance of finger-vein identification systems presented in the literature. The proposed system simultaneously acquires the finger-vein and low-resolution fingerprint images and combines these two evidences using a novel score-level combination strategy. We examine the previously proposed finger-vein identification approaches and develop a new approach that illustrates it superiority over prior published efforts. The utility of low-resolution fingerprint images acquired from a webcam is examined to ascertain the matching performance from such images. We develop and investigate two new score-level combinations, i.e., holistic and nonlinear fusion, and comparatively evaluate them with more popular score-level fusion approaches to ascertain their effectiveness in the proposed system. The rigorous experimental results presented on the database of 6264 images from 156 subjects illustrate significant improvement in the performance, i.e., both from the authentication and recognition experiments.", "title": "" }, { "docid": "5a7e97c755e29a9a3c82fc3450f9a929", "text": "Intel Software Guard Extensions (SGX) is a hardware-based Trusted Execution Environment (TEE) that enables secure execution of a program in an isolated environment, called an enclave. SGX hardware protects the running enclave against malicious software, including the operating system, hypervisor, and even low-level firmware. This strong security property allows trustworthy execution of programs in hostile environments, such as a public cloud, without trusting anyone (e.g., a cloud provider) between the enclave and the SGX hardware. However, recent studies have demonstrated that enclave programs are vulnerable to accurate controlled-channel attacks conducted by a malicious OS. Since enclaves rely on the underlying OS, curious and potentially malicious OSs can observe a sequence of accessed addresses by intentionally triggering page faults. In this paper, we propose T-SGX, a complete mitigation solution to the controlled-channel attack in terms of compatibility, performance, and ease of use. T-SGX relies on a commodity component of the Intel processor (since Haswell), called Transactional Synchronization Extensions (TSX), which implements a restricted form of hardware transactional memory. As TSX is implemented as an extension (i.e., snooping the cache protocol), any unusual event, such as an exception or interrupt, that should be handled in its core component, results in an abort of the ongoing transaction. One interesting property is that the TSX abort suppresses the notification of errors to the underlying OS. This means that the OS cannot know whether a page fault has occurred during the transaction. 
T-SGX, by utilizing this property of TSX, can carefully isolate the effect of attempts to tap running enclaves, thereby completely eradicating the known controlledchannel attack. We have implemented T-SGX as a compiler-level scheme to automatically transform a normal enclave program into a secured enclave program without requiring manual source code modification or annotation. We not only evaluate the security properties of T-SGX, but also demonstrate that it could be applied to all the previously demonstrated attack targets, such as libjpeg, Hunspell, and FreeType. To evaluate the performance of T-SGX, we ported 10 benchmark programs of nbench to the SGX environment. Our evaluation results look promising. T-SGX is † The two lead authors contributed equally to this work. ⋆ The author did part of this work during an intership at Microsoft Research. an order of magnitude faster than the state-of-the-art mitigation schemes. On our benchmarks, T-SGX incurs on average 50% performance overhead and less than 30% storage overhead.", "title": "" }, { "docid": "394fcbcb013951dbc01fdbc713ac6e62", "text": "We present an approach to text simplification based on synchronous dependency grammars. The higher level of abstraction afforded by dependency representations allows for a linguistically sound treatment of complex constructs requiring reordering and morphological change, such as conversion of passive voice to active. We present a synchronous grammar formalism in which it is easy to write rules by hand and also acquire them automatically from dependency parses of aligned English and Simple English sentences. The grammar formalism is optimised for monolingual translation in that it reuses ordering information from the source sentence where appropriate. We demonstrate the superiority of our approach over a leading contemporary system based on quasi-synchronous tree substitution grammars, both in terms of expressivity and performance.", "title": "" }, { "docid": "b7046fb7619949b9a03823450c19a8d5", "text": "We introduce a model that learns active learning algorithms via metalearning. For a distribution of related tasks, our model jointly learns: a data representation, an item selection heuristic, and a prediction function. Our model uses the item selection heuristic to construct a labeled support set for training the prediction function. Using the Omniglot and MovieLens datasets, we test our model in synthetic and practical settings.", "title": "" }, { "docid": "93df255e39f57dd2167191da4b90540f", "text": "OBJECTIVE\nParkinsonian patients have abnormal oscillatory activity within the basal ganglia-thalamocortical circuitry. Particularly, excessive beta band oscillations are thought to be associated with akinesia. We studied whether cortical spontaneous activity is modified by deep brain stimulation (DBS) in advanced Parkinson's disease and if the modifications are related to the clinical symptoms.\n\n\nMETHODS\nWe studied the effects of bilateral electrical stimulation of subthalamic nucleus (STN) on cortical spontaneous activity by magnetoencephalography (MEG) in 11 Parkinsonian patients. The artifacts produced by DBS were suppressed by tSSS algorithm.\n\n\nRESULTS\nDuring DBS, UPDRS (Unified Parkinson's Disease Rating Scale) rigidity scores correlated with 6-10 Hz and 12-20 Hz somatomotor source strengths when eyes were open. When DBS was off UPDRS action tremor scores correlated with pericentral 6-10 Hz and 21-30 Hz and occipital alpha source strengths when eyes open. 
Occipital alpha strength decreased during DBS when eyes closed. The peak frequency of occipital alpha rhythm correlated negatively with total UPDRS motor scores and with rigidity subscores, when eyes closed.\n\n\nCONCLUSION\nSTN DBS modulates brain oscillations both in alpha and beta bands and these oscillations reflect the clinical condition during DBS.\n\n\nSIGNIFICANCE\nMEG combined with an appropriate artifact rejection method enables studies of DBS effects in Parkinson's disease and presumably also in the other emerging DBS indications.", "title": "" }, { "docid": "02e0514dc8b7bfa65b55a0e8969dd0ad", "text": "A detailed comparison was made of two methods for assessing the features of eating disorders. An investigator-based interview was compared with a self-report questionnaire based directly on that interview. A number of important discrepancies emerged. Although the two measures performed similarly with respect to the assessment of unambiguous behavioral features such as self-induced vomiting and dieting, the self-report questionnaire generated higher scores than the interview when assessing more complex features such as binge eating and concerns about shape. Both methods underestimated body weight.", "title": "" }, { "docid": "911ea52fa57524e002154e2fe276ac44", "text": "Many current natural language processing applications for social media rely on representation learning and utilize pre-trained word embeddings. There currently exist several publicly-available, pre-trained sets of word embeddings, but they contain few or no emoji representations even as emoji usage in social media has increased. In this paper we release emoji2vec, pre-trained embeddings for all Unicode emoji which are learned from their description in the Unicode emoji standard.1 The resulting emoji embeddings can be readily used in downstream social natural language processing applications alongside word2vec. We demonstrate, for the downstream task of sentiment analysis, that emoji embeddings learned from short descriptions outperforms a skip-gram model trained on a large collection of tweets, while avoiding the need for contexts in which emoji need to appear frequently in order to estimate a representation.", "title": "" }, { "docid": "e31ea6b8c4a5df049782b463abc602ea", "text": "Nature plays a very important role to solve problems in a very effective and well-organized way. Few researchers are trying to create computational methods that can assist human to solve difficult problems. Nature inspired techniques like swarm intelligence, bio-inspired, physics/chemistry and many more have helped in solving difficult problems and also provide most favourable solution. Nature inspired techniques are wellmatched for soft computing application because parallel, dynamic and self organising behaviour. These algorithms motivated from the working group of social agents like ants, bees and insect. This paper is a complete survey of nature inspired techniques.", "title": "" } ]
scidocsrr
98f15dcee44b3b0014a0dc70c2ba6fca
Survey on distance metric learning and dimensionality reduction in data mining
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "effa64c878add2a55a804415cb7c8169", "text": "Dimensionality reduction is an important issue in many machine learning and pattern recognition applications, and the trace ratio (TR) problem is an optimization problem involved in many dimensionality reduction algorithms. Conventionally, the solution is approximated via generalized eigenvalue decomposition due to the difficulty of the original problem. However, prior works have indicated that it is more reasonable to solve it directly than via the conventional way. In this brief, we propose a theoretical overview of the global optimum solution to the TR problem via the equivalent trace difference problem. Eigenvalue perturbation theory is introduced to derive an efficient algorithm based on the Newton-Raphson method. Theoretical issues on the convergence and efficiency of our algorithm compared with prior literature are proposed, and are further supported by extensive empirical results.", "title": "" }, { "docid": "7655df3f32e6cf7a5545ae2231f71e7c", "text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. 
LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.", "title": "" } ]
[ { "docid": "f136e875f021ea3ea67a87c6d0b1e869", "text": "Platelet-rich plasma (PRP) has been utilized for many years as a regenerative agent capable of inducing vascularization of various tissues using blood-derived growth factors. Despite this, drawbacks mostly related to the additional use of anti-coagulants found in PRP have been shown to inhibit the wound healing process. For these reasons, a novel platelet concentrate has recently been developed with no additives by utilizing lower centrifugation speeds. The purpose of this study was therefore to investigate osteoblast behavior of this novel therapy (injectable-platelet-rich fibrin; i-PRF, 100% natural with no additives) when compared to traditional PRP. Human primary osteoblasts were cultured with either i-PRF or PRP and compared to control tissue culture plastic. A live/dead assay, migration assay as well as a cell adhesion/proliferation assay were investigated. Furthermore, osteoblast differentiation was assessed by alkaline phosphatase (ALP), alizarin red and osteocalcin staining, as well as real-time PCR for genes encoding Runx2, ALP, collagen1 and osteocalcin. The results showed that all cells had high survival rates throughout the entire study period irrespective of culture-conditions. While PRP induced a significant 2-fold increase in osteoblast migration, i-PRF demonstrated a 3-fold increase in migration when compared to control tissue-culture plastic and PRP. While no differences were observed for cell attachment, i-PRF induced a significantly higher proliferation rate at three and five days when compared to PRP. Furthermore, i-PRF induced significantly greater ALP staining at 7 days and alizarin red staining at 14 days. A significant increase in mRNA levels of ALP, Runx2 and osteocalcin, as well as immunofluorescent staining of osteocalcin was also observed in the i-PRF group when compared to PRP. In conclusion, the results from the present study favored the use of the naturally-formulated i-PRF when compared to traditional PRP with anti-coagulants. Further investigation into the direct role of fibrin and leukocytes contained within i-PRF are therefore warranted to better elucidate their positive role in i-PRF on tissue wound healing.", "title": "" }, { "docid": "625c5c89b9f0001a3eed1ec6fb498c23", "text": "About a 100 years ago, the Drosophila white mutant marked the birth of Drosophila genetics. The white gene turned out to encode the first well studied ABC transporter in arthropods. The ABC gene family is now recognized as one of the largest transporter families in all kingdoms of life. The majority of ABC proteins function as primary-active transporters that bind and hydrolyze ATP while transporting a large diversity of substrates across lipid membranes. Although extremely well studied in vertebrates for their role in drug resistance, less is known about the role of this family in the transport of endogenous and exogenous substances in arthropods. The ABC families of five insect species, a crustacean and a chelicerate have been annotated in some detail. We conducted a thorough phylogenetic analysis of the seven arthropod and human ABC protein subfamilies, to infer orthologous relationships that might suggest conserved function. Most orthologous relationships were found in the ABCB half transporter, ABCD, ABCE and ABCF subfamilies, but specific expansions within species and lineages are frequently observed and discussed. 
We next surveyed the role of ABC transporters in the transport of xenobiotics/plant allelochemicals and their involvement in insecticide resistance. The involvement of ABC transporters in xenobiotic resistance in arthropods is historically not well documented, but an increasing number of studies using unbiased differential gene expression analysis now points to their importance. We give an overview of methods that can be used to link ABC transporters to resistance. ABC proteins have also recently been implicated in the mode of action and resistance to Bt toxins in Lepidoptera. Given the enormous interest in Bt toxicology in transgenic crops, such findings will provide an impetus to further reveal the role of ABC transporters in arthropods. 2014 The Authors. Published by Elsevier Ltd. Open access under CC BY-NC-ND license.", "title": "" }, { "docid": "552ad2b05d0e7812bb5e17fb22c3de28", "text": "Behavior-based agents are becoming increasingly used across a variety of platforms. The common approach to building such agents involves implementing the behavior synchronization and management algorithms directly in the agent’s programming environment. This process makes it hard, if not impossible, to share common components of a behavior architecture across different agent implementations. This lack of reuse also makes it cumbersome to experiment with different behavior architectures as it forces users to manipulate native code directly, e.g. C++ or Java. In this paper, we provide a high-level behavior-centric programming language and an automated code generation system which together overcome these issues and facilitate the process of implementing and experimenting with different behavior architectures. The language is specifically designed to allow clear and precise descriptions of a behavior hierarchy, and can be automatically translated by our generator into C++ code. Once compiled, this C++ code yields an executable that directs the execution of behaviors in the agent’s sense-plan-act cycle. We have tested this process with different platforms, including both software and robot agents, with various behavior architectures. We experienced the advantages of defining an agent by directly reasoning at the behavior architecture level followed by the automatic native code generation.", "title": "" }, { "docid": "3535e70b1c264d99eff5797413650283", "text": "MIMO is one of the techniques used in LTE Release 8 to achieve very high data rates. A field trial was performed in a pre-commercial LTE network. The objective is to investigate how well MIMO works with realistically designed handhelds in band 13 (746-756 MHz in downlink). In total, three different handheld designs were tested using antenna mockups. In addition to the mockups, a reference antenna design with less stringent restrictions on physical size and excellent properties for MIMO was used. The trial comprised test drives in areas with different characteristics and with different network load levels. The effects of hands holding the devices and the effect of using the device inside a test vehicle were also investigated. In general, it is very clear from the trial that MIMO works very well and gives a substantial performance improvement at the tested carrier frequency if the antenna design of the hand-held is well made with respect to MIMO. 
In fact, the best of the handhelds performed similar to the reference antenna.", "title": "" }, { "docid": "8aa305f217314d60ed6c9f66d20a7abf", "text": "The circadian timing system drives daily rhythmic changes in drug metabolism and controls rhythmic events in cell cycle, DNA repair, apoptosis, and angiogenesis in both normal tissue and cancer. Rodent and human studies have shown that the toxicity and anticancer activity of common cancer drugs can be significantly modified by the time of administration. Altered sleep/activity rhythms are common in cancer patients and can be disrupted even more when anticancer drugs are administered at their most toxic time. Disruption of the sleep/activity rhythm accelerates cancer growth. The complex circadian time-dependent connection between host, cancer and therapy is further impacted by other factors including gender, inter-individual differences and clock gene polymorphism and/or down regulation. It is important to take circadian timing into account at all stages of new drug development in an effort to optimize the therapeutic index for new cancer drugs. Better measures of the individual differences in circadian biology of host and cancer are required to further optimize the potential benefit of chronotherapy for each individual patient.", "title": "" }, { "docid": "9c89c4c4ae75f9b003fca6696163619a", "text": "We study a class of stochastic optimization models of expected utility in markets with stochastically changing investment opportunities. The prices of the primitive assets are modelled as diffusion processes whose coefficients evolve according to correlated diffusion factors. Under certain assumptions on the individual preferences, we are able to produce reduced form solutions. Employing a power transformation, we express the value function in terms of the solution of a linear parabolic equation, with the power exponent depending only on the coefficients of correlation and risk aversion. This reduction facilitates considerably the study of the value function and the characterization of the optimal hedging demand. The new results demonstrate an interesting connection with valuation techniques using stochastic differential utilities and also, with distorted measures in a dynamic setting.", "title": "" }, { "docid": "d3d57d67d4384f916f9e9e48f3fcdcdb", "text": "Web-based social networks have become popular as a medium for disseminating information and connecting like-minded people. The public accessibility of such networks with the ability to share opinions, thoughts, information, and experience offers great promise to enterprises and governments. In addition to individuals using such networks to connect to their friends and families, governments and enterprises have started exploiting these platforms for delivering their services to citizens and customers. However, the success of such attempts relies on the level of trust that members have with each other as well as with the service provider. Therefore, trust becomes an essential and important element of a successful social network. In this article, we present the first comprehensive review of social and computer science literature on trust in social networks. We first review the existing definitions of trust and define social trust in the context of social networks. We then discuss recent works addressing three aspects of social trust: trust information collection, trust evaluation, and trust dissemination. 
Finally, we compare and contrast the literature and identify areas for further research in social trust.", "title": "" }, { "docid": "405a1e8badfb85dcd1d5cc9b4a0026d2", "text": "It is of great practical importance to improve yield and quality of vegetables in soilless cultures. This study investigated the effects of iron-nutrition management on yield and quality of hydroponic-cultivated spinach (Spinacia oleracea L.). The results showed that mild Fe-deficient treatment (1 μM FeEDTA) yielded a greater biomass of edible parts than Fe-omitted treatment (0 μM FeEDTA) or Fe-sufficient treatments (10 and 50 μM FeEDTA). Conversely, mild Fe-deficient treatment had the lowest nitrate concentration in the edible parts out of all the Fe treatments. Interestingly, all the concentrations of soluble sugar, soluble protein and ascorbate in mild Fe-deficient treatments were higher than Fe-sufficient treatments. In addition, both phenolic concentration and DPPH scavenging activity in mild Fe-deficient treatments were comparable with those in Fe-sufficient treatments, but were higher than those in Fe-omitted treatments. Therefore, we concluded that using a mild Fe-deficient nutrition solution to cultivate spinach not only would increase yield, but also would improve quality.", "title": "" }, { "docid": "781ebbf85a510cfd46f0c824aa4aba7e", "text": "Human activity recognition (HAR) is an important research area in the fields of human perception and computer vision due to its wide range of applications. These applications include: intelligent video surveillance, ambient assisted living, human computer interaction, human-robot interaction, entertainment, and intelligent driving. Recently, with the emergence and successful deployment of deep learning techniques for image classification, researchers have migrated from traditional handcrafting to deep learning techniques for HAR. However, handcrafted representation-based approaches are still widely used due to some bottlenecks such as computational complexity of deep learning techniques for activity recognition. However, approaches based on handcrafted representation are not able to handle complex scenarios due to their limitations and incapability; therefore, resorting to deep learning-based techniques is a natural option. This review paper presents a comprehensive survey of both handcrafted and learning-based action representations, offering comparison, analysis, and discussions on these approaches. In addition to this, the well-known public datasets available for experimentations and important applications of HAR are also presented to provide further insight into the field. This is the first review paper of its kind which presents all these aspects of HAR in a single review article with comprehensive coverage of each part. Finally, the paper is concluded with important discussions and research directions in the domain of HAR.", "title": "" }, { "docid": "0c805b994e89c878a62f2e1066b0a8e7", "text": "3D spatial data modeling is one of the key research problems in 3D GIS. More and more applications depend on these 3D spatial data. Mostly, these data are stored in Geo-DBMSs. However, recent Geo-DBMSs do not support 3D primitives modeling, it only able to describe a single-attribute of the third-dimension, i.e. modeling 2.5D datasets that used 2D primitives (plus a single z-coordinate) such as polygons in 3D space. This research focuses on 3D topological model based on space partition for 3D GIS, for instance, 3D polygons or tetrahedron form a solid3D object. 
Firstly, this report discusses formal definitions of 3D spatial objects, and then all the properties of each object primitives will be elaborated in detailed. The author also discusses methods for constructing the topological properties to support object semantics is introduced. The formal framework to describe the spatial model, database using Oracle Spatial is also given in this report. All related topological structures that forms the object features are discussed in detail. All related features are tested using real 3D spatial dataset of 3D building. Finally, the report concludes the experiment via visualization of using AutoDesk Map 3D.", "title": "" }, { "docid": "1b030e734e3ddfb5e612b1adc651b812", "text": "Clustering1is an essential task in many areas such as machine learning, data mining and computer vision among others. Cluster validation aims to assess the quality of partitions obtained by clustering algorithms. Several indexes have been developed for cluster validation purpose. They can be external or internal depending on the availability of ground truth clustering. This paper deals with the issue of cluster validation of large data set. Indeed, in the era of big data this task becomes even more difficult to handle and requires parallel and distributed approaches. In this work, we are interested in external validation indexes. More specifically, this paper proposes a model for purity based cluster validation in parallel and distributed manner using Map-Reduce paradigm in order to be able to scale with increasing dataset sizes.\n The experimental results show that our proposed model is valid and achieves properly cluster validation of large datasets.", "title": "" }, { "docid": "d71040311b8753299377b02023ba5b4c", "text": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.", "title": "" }, { "docid": "dc2ea774fb11bc09e80b9de3acd7d5a6", "text": "The Hough transform is a well-known straight line detection algorithm and it has been widely used for many lane detection algorithms. However, its real-time operation is not guaranteed due to its high computational complexity. In this paper, we designed a Hough transform hardware accelerator on FPGA to process it in real time. 
Its FPGA logic area usage was reduced by limiting the angles of the lines to (-20, 20) degrees which are enough for lane detection applications, and its arithmetic computations were performed in parallel to speed up the processing time. As a result of FPGA synthesis using Xilinx Vertex-5 XC5VLX330 device, it occupies 4,521 slices and 25.6Kbyte block memory giving performance of 10,000fps in VGA images(5000 edge points). The proposed hardware on FPGA (0.1ms) is 450 times faster than the software implementation on ARM Cortex-A9 1.4GHz (45ms). Our Hough transform hardware was verified by applying it to the newly developed LDWS (lane departure warning system).", "title": "" }, { "docid": "dd726458660c3dfe05bd775df562e188", "text": "Maternally deprived rats were treated with tianeptine (15 mg/kg) once a day for 14 days during their adult phase. Their behavior was then assessed using the forced swimming and open field tests. The BDNF, NGF and energy metabolism were assessed in the rat brain. Deprived rats increased the immobility time, but tianeptine reversed this effect and increased the swimming time; the BDNF levels were decreased in the amygdala of the deprived rats treated with saline and the BDNF levels were decreased in the nucleus accumbens within all groups; the NGF was found to have decreased in the hippocampus, amygdala and nucleus accumbens of the deprived rats; citrate synthase was increased in the hippocampus of non-deprived rats treated with tianeptine and the creatine kinase was decreased in the hippocampus and amygdala of the deprived rats; the mitochondrial complex I and II–III were inhibited, and tianeptine increased the mitochondrial complex II and IV in the hippocampus of the non-deprived rats; the succinate dehydrogenase was increased in the hippocampus of non-deprived rats treated with tianeptine. So, tianeptine showed antidepressant effects conducted on maternally deprived rats, and this can be attributed to its action on the neurochemical pathways related to depression.", "title": "" }, { "docid": "79593cc56da377d834f33528b833641f", "text": "Machine learning offers a fantastically powerful toolkit f or building complex systems quickly. This paper argues that it is dangerous to think of these quick wins as coming for free. Using the framework of technical debt , we note that it is remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning. The goal of this paper is hig hlight several machine learning specific risk factors and design patterns to b e avoided or refactored where possible. These include boundary erosion, entanglem ent, hidden feedback loops, undeclared consumers, data dependencies, changes i n the external world, and a variety of system-level anti-patterns. 1 Machine Learning and Complex Systems Real world software engineers are often faced with the chall enge of moving quickly to ship new products or services, which can lead to a dilemma between spe ed of execution and quality of engineering. The concept of technical debtwas first introduced by Ward Cunningham in 1992 as a way to help quantify the cost of such decisions. Like incurri ng fiscal debt, there are often sound strategic reasons to take on technical debt. Not all debt is n ecessarily bad, but technical debt does tend to compound. Deferring the work to pay it off results in i ncreasing costs, system brittleness, and reduced rates of innovation. 
Traditional methods of paying off technical debt include re factoring, increasing coverage of unit tests, deleting dead code, reducing dependencies, tighten ng APIs, and improving documentation [4]. The goal of these activities is not to add new functionality, but to make it easier to add future improvements, be cheaper to maintain, and reduce the likeli hood of bugs. One of the basic arguments in this paper is that machine learn ing packages have all the basic code complexity issues as normal code, but also have a larger syst em-level complexity that can create hidden debt. Thus, refactoring these libraries, adding bet ter unit tests, and associated activity is time well spent but does not necessarily address debt at a systems level. In this paper, we focus on the system-level interaction betw e n machine learning code and larger systems as an area where hidden technical debt may rapidly accum ulate. At a system-level, a machine learning model may subtly erode abstraction boundaries. It may be tempting to re-use input signals in ways that create unintended tight coupling of otherw ise disjoint systems. Machine learning packages may often be treated as black boxes, resulting in la rge masses of “glue code” or calibration layers that can lock in assumptions. Changes in the exte rnal world may make models or input signals change behavior in unintended ways, ratcheting up m aintenance cost and the burden of any debt. Even monitoring that the system as a whole is operating s intended may be difficult without careful design.", "title": "" }, { "docid": "6cad42e549f449c7156b0a07e2e02726", "text": "Fog computing extends the cloud computing paradigm by placing resources close to the edges of the network to deal with the upcoming growth of connected devices. Smart city applications, such as health monitoring and predictive maintenance, will introduce a new set of stringent requirements, such as low latency, since resources can be requested on-demand simultaneously by multiple devices at different locations. It is then necessary to adapt existing network technologies to future needs and design new architectural concepts to help meet these strict requirements. This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities. Our approach follows the guidelines of the European Telecommunications Standards Institute (ETSI) NFV MANO architecture extending it with additional software components. The contribution of our work is its fully-integrated fog node management system alongside the foreseen application layer Peer-to-Peer (P2P) fog protocol based on the Open Shortest Path First (OSPF) routing protocol for the exchange of application service provisioning information between fog nodes. Evaluations of an anomaly detection use case based on an air monitoring application are presented. Our results show that the proposed framework achieves a substantial reduction in network bandwidth usage and in latency when compared to centralized cloud solutions.", "title": "" }, { "docid": "d59d1ac7b3833ee1e60f7179a4a9af99", "text": "s Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. 
Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. GJCST Classification : C.1.4, C.2.1 Research Issues in Cloud Computing Strictly as per the compliance and regulations of: Research Issues in Cloud Computing V. Krishna Reddy , B. Thirumala Rao , Dr. L.S.S. Reddy , P. Sai Kiran ABSTRACT : Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges.", "title": "" }, { "docid": "fedcb2bd51b9fd147681ae23e03c7336", "text": "Epidemiological studies have revealed the important role that foodstuffs of vegetable origin have to play in the prevention of numerous illnesses. The natural antioxidants present in such foodstuffs, among which the fl avonoids are widely present, may be responsible for such an activity. Flavonoids are compounds that are low in molecular weight and widely distributed throughout the vegetable kingdom. They may be of great utility in states of accute or chronic diarrhoea through the inhibition of intestinal secretion and motility, and may also be benefi cial in the reduction of chronic infl ammatory damage in the intestine, by affording protection against oxidative stress and by preserving mucosal function. For this reason, the use of these agents is recommended in the treatment of infl ammatory bowel disease, in which various factors are involved in extreme immunological reactions, which lead to chronic intestinal infl ammation.", "title": "" }, { "docid": "a89c0a16d161ef41603583567f85a118", "text": "360° Video services with resolutions of UHD and beyond for Virtual Reality head mounted displays are a challenging task due to limits of video decoders in constrained end devices. 
Adaptivity to the current user viewport is a promising approach but incurs significant encoding overhead when encoding per user or set of viewports. A more efficient way to achieve viewport adaptive streaming is to facilitate motion-constrained HEVC tiles. Original content resolution within the user viewport is preserved while content currently not presented to the user is delivered in lower resolution. A lightweight aggregation of varying resolution tiles into a single HEVC bitstream can be carried out on-the-fly and allows usage of a single decoder instance on the end device.", "title": "" }, { "docid": "241f5a88f53c929cc11ce0edce191704", "text": "Enabled by mobile and wearable technology, personal health data delivers immense and increasing value for healthcare, benefiting both care providers and medical research. The secure and convenient sharing of personal health data is crucial to the improvement of the interaction and collaboration of the healthcare industry. Faced with the potential privacy issues and vulnerabilities existing in current personal health data storage and sharing systems, as well as the concept of self-sovereign data ownership, we propose an innovative user-centric health data sharing solution by utilizing a decentralized and permissioned blockchain to protect privacy using channel formation scheme and enhance the identity management using the membership service supported by the blockchain. A mobile application is deployed to collect health data from personal wearable devices, manual input, and medical devices, and synchronize data to the cloud for data sharing with healthcare providers and health insurance companies. To preserve the integrity of health data, within each record, a proof of integrity and validation is permanently retrievable from cloud database and is anchored to the blockchain network. Moreover, for scalable and performance considerations, we adopt a tree-based data processing and batching method to handle large data sets of personal health data collected and uploaded by the mobile platform.", "title": "" } ]
scidocsrr
22c5fd7ddba330aa4189160f32fafa49
Text Embeddings for Retrieval From a Large Knowledge Base
[ { "docid": "b5a5f8fc7015e8a9632376b81fdfcaa6", "text": "Despite the fast developmental pace of new sentence embedding methods, it is still challenging to find comprehensive evaluations of these different techniques. In the past years, we saw significant improvements in the field of sentence embeddings and especially towards the development of universal sentence encoders that could provide inductive transfer to a wide variety of downstream tasks. In this work, we perform a comprehensive evaluation of recent methods using a wide variety of downstream and linguistic feature probing tasks. We show that a simple approach using bag-of-words with a recently introduced language model for deep contextdependent word embeddings proved to yield better results in many tasks when compared to sentence encoders trained on entailment datasets. We also show, however, that we are still far away from a universal encoder that can perform consistently across several downstream tasks.", "title": "" } ]
[ { "docid": "ad48ca7415808c4337c0b6eb593005d6", "text": "Neuroscience is experiencing a data revolution in which many hundreds or thousands of neurons are recorded simultaneously. Currently, there is little consensus on how such data should be analyzed. Here we introduce LFADS (Latent Factor Analysis via Dynamical Systems), a method to infer latent dynamics from simultaneously recorded, single-trial, high-dimensional neural spiking data. LFADS is a sequential model based on a variational auto-encoder. By making a dynamical systems hypothesis regarding the generation of the observed data, LFADS reduces observed spiking to a set of low-dimensional temporal factors, per-trial initial conditions, and inferred inputs. We compare LFADS to existing methods on synthetic data and show that it significantly out-performs them in inferring neural firing rates and latent dynamics.", "title": "" }, { "docid": "0d48e7715f3e0d74407cc5a21f2c322a", "text": "Every teacher of linear algebra should be familiar with the matrix singular value decomposition (or SVD). It has interesting and attractive algebraic properties, and conveys important geometrical and theoretical insights about linear transformations. The close connection between the SVD and the well known theory of diagonalization for symmetric matrices makes the topic immediately accessible to linear algebra teachers, and indeed, a natural extension of what these teachers already know. At the same time, the SVD has fundamental importance in several different applications of linear algebra. Strang was aware of these facts when he introduced the SVD in his now classical text [22, page 142], observing", "title": "" }, { "docid": "4035273cce65e3fe73e0a000c1726c0d", "text": "In recent years, organizations have invested heavily in e-procurement technology solutions. However, an estimation of the value of the technology-enabled procurement process is often lacking. Our paper presents a rigorous methodological approach to the analysis of e-procurement benefits. Business process simulations are used to analyze the benefits of both technological and organizational changes related to e-procurement. The approach enables an estimation of both the average and variability of procurement costs and benefits, workload, and lead times. In addition, the approach enables optimization of a procurement strategy (e.g., approval levels). Finally, an innovative approach to estimation of value at risk is shown.", "title": "" }, { "docid": "06caed57da5784de254b5efcf1724003", "text": "The validity of any traffic simulation model depends on its ability to generate representative driver acceleration profiles. This paper studies the effectiveness of recurrent neural networks in predicting the acceleration distributions for car following on highways. The long short-term memory recurrent networks are trained and used to propagate the simulated vehicle trajectories over 10-s horizons. On the basis of several performance metrics, the recurrent networks are shown to generally match or outperform baseline methods in replicating driver behavior, including smoothness and oscillatory characteristics present in real trajectories. 
This paper reveals that the strong performance is due to the ability of the recurrent network to identify recent trends in the ego-vehicle's state, and recurrent networks are shown to perform as well as feedforward networks with longer histories as inputs.", "title": "" }, { "docid": "b8466da90f2e75df2cc8453564ddb3e8", "text": "Deep neural networks have recently shown impressive classification performance on a diverse set of visual tasks. When deployed in real-world (noise-prone) environments, it is equally important that these classifiers satisfy robustness guarantees: small perturbations applied to the samples should not yield significant losses to the performance of the predictor. The goal of this paper is to discuss the robustness of deep networks to a diverse set of perturbations that may affect the samples in practice, including adversarial perturbations, random noise, and geometric transformations. Our paper further discusses the recent works that build on the robustness analysis to provide geometric insights on the classifier’s decision surface, which help in developing a better understanding of deep nets. The overview finally presents recent solutions that attempt to increase the robustness of deep networks. We hope that this review paper will contribute to shedding light on the open research challenges in the robustness of deep networks, and will stir interest in the analysis of their fundamental properties.", "title": "" }, { "docid": "47c5f3a7230ac19b8889ced2d8f4318a", "text": "This paper deals with the setting parameter optimization procedure for a multi-phase induction heating system considering transverse flux heating. This system is able to achieve uniform static heating of different thin/size metal pieces without movable inductor parts, yokes or magnetic screens. The goal is reached by the predetermination of the induced power density distribution using an optimization procedure that leads to the required inductor supplying currents. The purpose of the paper is to describe the optimization program with the different solution obtained and to show that some compromise must be done between the accuracy of the temperature profile and the energy consumption.", "title": "" }, { "docid": "0358eea62c126243134ed1cd2ac97121", "text": "In the absence of vision, grasping an object often relies on tactile feedback from the fingertips. As the finger pushes the object, the fingertip can feel the contact point move. If the object is known in advance, from this motion the finger may infer the location of the contact point on the object and thereby the object pose. This paper primarily investigates the problem of determining the pose (orientation and position) and motion (velocity and angular velocity) of a planar object with known geometry from such contact motion generated by pushing. A dynamic analysis of pushing yields a nonlinear system that relates through contact the object pose and motion to the finger motion. The contact motion on the fingertip thus encodes certain information about the object pose. Nonlinear observability theory is employed to show that such information is sufficient for the finger to \"observe\" not only the pose but also the motion of the object. Therefore a sensing strategy can be realized as an observer of the nonlinear dynamical system. Two observers are subsequently introduced. The first observer, based on the result of [15], has its \"gain\" determined by the solution of a Lyapunov-like equation; it can be activated at any time instant during a push. 
The second observer, based on Newton's method, solves for the initial (motionless) object pose from three intermediate contact points during a push. Under the Coulomb friction model, the paper copes with support friction in the plane and/or contact friction between the finger and the object. Extensive simulations have been done to demonstrate the feasibility of the two observers. Preliminary experiments (with an Adept robot) have also been conducted. A contact sensor has been implemented using strain gauges. Accepted by the International Journal of Robotics Research.", "title": "" }, { "docid": "33bd561e2d8e1799d5d5156cbfe3f2e5", "text": "OBJECTIVE\nTo assess the effects of Balint groups on empathy measured by the Consultation And Relational Empathy Measure (CARE) scale rated by standardized patients during objective structured clinical examination and self-rated Jefferson's School Empathy Scale - Medical Student (JSPE-MS©) among fourth-year medical students.\n\n\nMETHODS\nA two-site randomized controlled trial were planned, from October 2015 to December 2015 at Paris Diderot and Paris Descartes University, France. Eligible students were fourth-year students who gave their consent to participate. Participants were allocated in equal proportion to the intervention group or to the control group. Participants in the intervention group received a training of 7 sessions of 1.5-hour Balint groups, over 3months. The main outcomes were CARE and the JSPE-MS© scores at follow-up.\n\n\nRESULTS\nData from 299 out of 352 randomized participants were analyzed: 155 in the intervention group and 144 in the control group, with no differences in baseline measures. There was no significant difference in CARE score at follow-up between the two groups (P=0.49). The intervention group displayed significantly higher JSPE-MS© score at follow-up than the control group [Mean (SD): 111.9 (10.6) versus 107.7 (12.7), P=0.002]. The JSPE-MS© score increased from baseline to follow-up in the intervention group, whereas it decreased in the control group [1.5 (9.1) versus -1.8 (10.8), P=0.006].\n\n\nCONCLUSIONS\nBalint groups may contribute to promote clinical empathy among medical students.\n\n\nTRIAL REGISTRATION\nNCT02681380.", "title": "" }, { "docid": "b5515ce58a5f40fb5129560c9bdc3b10", "text": "Lipoid pneumonia in children follows mineral oil aspiration and may result in acute respiratory failure. Majority of the patients recover without long-term morbidity, though a few may be left with residual damage to the lungs. We report a case of a two-and-a-half-year-old child with persistent lipoid pneumonia following accidental inhalation of machine oil, who was successfully treated with steroids.", "title": "" }, { "docid": "f7f1deeda9730056876db39b4fe51649", "text": "Fracture in bone occurs when an external force exercised upon the bone is more than what the bone can tolerate or bear. As, its consequence structure and muscular power of the bone is disturbed and bone becomes frail, which causes tormenting pain on the bone and ends up in the loss of functioning of bone. Accurate bone structure and fracture detection is achieved using various algorithms which removes noise, enhances image details and highlights the fracture region. Automatic detection of fractures from x-ray images is considered as an important process in medical image analysis by both orthopaedic and radiologic aspect. Manual examination of x-rays has multitude drawbacks. The process is time consuming and subjective. 
In this paper we discuss several digital image processing techniques applied in fracture detection of bone. This led us to study techniques that have been applied to images obtained from different modalities like x-ray, CT, MRI and ultrasound. Keywords— Fracture detection, Medical Imaging, Morphology, Tibia, X-ray image", "title": "" }, { "docid": "d107d7bdfa1cd24985ec49b54b267ba7", "text": "The classification and the count of white blood cells in microscopy images allows the in vivo assessment of a wide range of important hematic pathologies (i.e., from presence of infections to leukemia). Nowadays, the morphological cell classification is typically made by experienced operators. Such a procedure presents undesirable drawbacks: slowness and it presents a not standardized accuracy since it depends on the operator's capabilities and tiredness. Only few attempts of partial/full automated systems based on image-processing systems are present in literature and they are still at prototype stage. This paper presents a methodology to achieve an automated detection and classification of leucocytes by microscope color images. The proposed system firstly individuates in the blood image the leucocytes from the others blood cells, then it extracts morphological indexes and finally it classifies the leucocytes by a neural classifier in Basophil, Eosinophil, Lymphocyte, Monocyte and Neutrophil.", "title": "" }, { "docid": "04f4058d37a33245abf8ed9acd0af35d", "text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretical transform based multiplier architecture capable of efficiently handling very large polynomials. When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec making it the fastest multiplier design of its kind currently available in the literature and is more than 102 times faster than a software implementation. Using this multiplier we can compute a relinearization operation in 526 msec. When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.", "title": "" }, { "docid": "b93a949c1c509bf8e5d36a9ec2cb37a5", "text": "At first glance, agile methods and global software development might seem incompatible. Agile methods stress continuous face-to-face communication, whereas communication has been reported as the biggest problem of global software development. One challenge to solve is how to apply agile practices in settings where continuous face-to-face interaction is missing. However, agile methods have been successfully used in distributed projects, indicating that they could benefit global software development. 
This paper discusses potential benefits and challenges of adopting agile methods in global software development. The literature on real industrial case studies reporting on experiences of using agile methods in distributed projects is still scarce. Therefore we suggest further research on the topic. We present our plans for research in companies using agile methods in their distributed projects. We also intend to test the use of agile principles in globally distributed student projects developing software for industrial clients", "title": "" }, { "docid": "b174bbcb91d35184674532b6ab22dcdf", "text": "Many studies have confirmed the benefit of gamification on learners’ motivation. However, gamification may also demotivate some learners, or learners may focus on the gamification elements instead of the learning content. Some researchers have recommended building learner models that can be used to adapt gamification elements based on learners’ personalities. Building such a model requires a strong understanding of the relationship between gamification and personality. Existing empirical work has focused on measuring knowledge gain and learner preference. These findings may not be reliable because the analyses are based on learners who complete the study and because they rely on self-report from learners. This preliminary study explores a different approach by allowing learners to drop out at any time and then uses the number of students left as a proxy for motivation and engagement. Survival analysis is used to analyse the data. The results confirm the benefits of gamification and provide some pointers to how this varies with personality.", "title": "" }, { "docid": "b271916d455789760d1aa6fda6af85c3", "text": "Over the last decade, automated vehicles have been widely researched and their massive potential has been verified through several milestone demonstrations. However, there are still many challenges ahead. One of the biggest challenges is integrating them into urban environments in which dilemmas occur frequently. Conventional automated driving strategies make automated vehicles foolish in dilemmas such as making lane-change in heavy traffic, handling a yellow traffic light and crossing a double-yellow line to pass an illegally parked car. In this paper, we introduce a novel automated driving strategy that allows automated vehicles to tackle these dilemmas. The key insight behind our automated driving strategy is that expert drivers understand human interactions on the road and comply with mutually-accepted rules, which are learned from countless experiences. In order to teach the driving strategy of expert drivers to automated vehicles, we propose a general learning framework based on maximum entropy inverse reinforcement learning and Gaussian process. Experiments are conducted on a 5.2 km-long campus road at Seoul National University and demonstrate that our framework performs comparably to expert drivers in planning trajectories to handle various dilemmas.", "title": "" }, { "docid": "fe446f500549cedce487b78a133cbc45", "text": "Drug addiction manifests as a compulsive drive to take a drug despite serious adverse consequences. This aberrant behaviour has traditionally been viewed as bad 'choices' that are made voluntarily by the addict. However, recent studies have shown that repeated drug use leads to long-lasting changes in the brain that undermine voluntary control. 
This, combined with new knowledge of how environmental, genetic and developmental factors contribute to addiction, should bring about changes in our approach to the prevention and treatment of addiction.", "title": "" }, { "docid": "32faa5a14922d44101281c783cf6defb", "text": "A novel multifocus color image fusion algorithm based on the quaternion wavelet transform (QWT) is proposed in this paper, aiming at solving the image blur problem. The proposed method uses a multiresolution analysis procedure based on the quaternion wavelet transform. The performance of the proposed fusion scheme is assessed by some experiments, and the experimental results show that the proposed method is effective and performs better than the existing fusion methods.", "title": "" }, { "docid": "3615093867394664629391e515fd4118", "text": "Using individual data on voting and political parties manifestos in European countries, we empirically characterize the drivers of voting for populist parties (the demand side) as well as the presence of populist parties (the supply side). We show that the economic insecurity drivers of the demand of populism are significant, especially when considering the key interactions with turnout incentives, neglected in previous studies. Once turnout effects are taken into account, economic insecurity drives consensus to populist policies directly and through indirect negative effects on trust and attitudes towards immigrants. On the supply side, populist parties are more likely to emerge when countries are faced with a systemic crisis of economic security. The orientation choice of populist parties, i.e., whether they arise on left or right of the political spectrum, is determined by the availability of political space. The typical mainstream parties response is to reduce the distance of their platform from that of successful populist entrants, amplifying the aggregate supply of populist policies.", "title": "" }, { "docid": "feda50d2876074ce37276d6df7d2823f", "text": "Word embedding models have become a fundamental component in a wide range of Natural Language Processing (NLP) applications. However, embeddings trained on human-generated corpora have been demonstrated to inherit strong gender stereotypes that reflect social constructs. To address this concern, in this paper, we propose a novel training procedure for learning gender-neutral word embeddings. Our approach aims to preserve gender information in certain dimensions of word vectors while compelling other dimensions to be free of gender influence. Based on the proposed method, we generate a GenderNeutral variant of GloVe (GN-GloVe). Quantitative and qualitative experiments demonstrate that GN-GloVe successfully isolates gender information without sacrificing the functionality of the embedding model.", "title": "" }, { "docid": "cee9b099f6ea087376b56067620e1c64", "text": "This paper presents a set of techniques for predicting aggressive comments in social media. In a time when cyberbullying has, unfortunately, made its entrance into society and Internet, it becomes necessary to find ways for preventing and overcoming this phenomenon. One of these concerns the use of machine learning techniques for automatically detecting cases of cyberbullying; a primary task within this cyberbullying detection consists of aggressive text detection. 
We concretely explore different computational techniques for carrying out this task, either as a classification or as a regression problem, and our results suggest that a key feature is the identification of profane words.", "title": "" } ]
scidocsrr
4b4e825bb799efb55611bb1a9827e2ba
Semantic Relation Classification via Hierarchical Recurrent Neural Network with Attention
[ { "docid": "7927dffe38cec1ce2eb27dbda644a670", "text": "This paper describes our system for SemEval-2010 Task 8 on multi-way classification of semantic relations between nominals. First, the type of semantic relation is classified. Then a relation typespecific classifier determines the relation direction. Classification is performed using SVM classifiers and a number of features that capture the context, semantic role affiliation, and possible pre-existing relations of the nominals. This approach achieved an F1 score of 82.19% and an accuracy of 77.92%.", "title": "" } ]
[ { "docid": "7411ae149016be794566261d7362f7d3", "text": "BACKGROUND\nProcrastination, to voluntarily delay an intended course of action despite expecting to be worse-off for the delay, is a persistent behavior pattern that can cause major psychological suffering. Approximately half of the student population and 15%-20% of the adult population are presumed having substantial difficulties due to chronic and recurrent procrastination in their everyday life. However, preconceptions and a lack of knowledge restrict the availability of adequate care. Cognitive behavior therapy (CBT) is often considered treatment of choice, although no clinical trials have previously been carried out.\n\n\nOBJECTIVE\nThe aim of this study will be to test the effects of CBT for procrastination, and to investigate whether it can be delivered via the Internet.\n\n\nMETHODS\nParticipants will be recruited through advertisements in newspapers, other media, and the Internet. Only people residing in Sweden with access to the Internet and suffering from procrastination will be included in the study. A randomized controlled trial with a sample size of 150 participants divided into three groups will be utilized. The treatment group will consist of 50 participants receiving a 10-week CBT intervention with weekly therapist contact. A second treatment group with 50 participants receiving the same treatment, but without therapist contact, will also be employed. The intervention being used for the current study is derived from a self-help book for procrastination written by one of the authors (AR). It includes several CBT techniques commonly used for the treatment of procrastination (eg, behavioral activation, behavioral experiments, stimulus control, and psychoeducation on motivation and different work methods). A control group consisting of 50 participants on a wait-list control will be used to evaluate the effects of the CBT intervention. For ethical reasons, the participants in the control group will gain access to the same intervention following the 10-week treatment period, albeit without therapist contact.\n\n\nRESULTS\nThe current study is believed to result in three important findings. First, a CBT intervention is assumed to be beneficial for people suffering from problems caused by procrastination. Second, the degree of therapist contact will have a positive effect on treatment outcome as procrastination can be partially explained as a self-regulatory failure. Third, an Internet based CBT intervention is presumed to be an effective way to administer treatment for procrastination, which is considered highly important, as the availability of adequate care is limited. 
The current study is therefore believed to render significant knowledge on the treatment of procrastination, as well as providing support for the use of Internet based CBT for difficulties due to delayed tasks and commitments.\n\n\nCONCLUSIONS\nTo our knowledge, the current study is the first clinical trial to examine the effects of CBT for procrastination, and is assumed to render significant knowledge on the treatment of procrastination, as well as investigating whether it can be delivered via the Internet.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov: NCT01842945; http://clinicaltrials.gov/show/NCT01842945 (Archived by WebCite at http://www.webcitation.org/6KSmaXewC).", "title": "" }, { "docid": "0939a703cb2eeb9396c4e681f95e1e4d", "text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See http://github.com/shelhamer/revolver for code, models, and more details.", "title": "" }, { "docid": "e5a936bbd9e6dc0189b7cc18268f0f87", "text": "A new method of obtaining amplitude modulation (AM) for determining target location with spinning reticles is presented. The method is based on the use of graded transmission capabilities. The AM spinning reticles previously presented were functions of three parameters: amplitude vs angle, amplitude vs radius, and phase. This paper presents these parameters along with their capabilities and limitations and shows that multiple parameters can be integrated into a single reticle. It is also shown that AM parameters can be combined with FM parameters in a single reticle. Also, a general equation is developed that relates the AM parameters to a reticle transmission equation.", "title": "" }, { "docid": "28d16f96ee1b7789666352f48876fbc4", "text": "The non-data components of a visualization, such as axes and legends, can often be just as important as the data itself. They provide contextual information essential to interpreting the data. In this paper, we describe an automated system for choosing positions and labels for axis tick marks. Our system extends Wilkinson's optimization-based labeling approach to create a more robust, full-featured axis labeler. We define an expanded space of axis labelings by automatically generating additional nice numbers as needed and by permitting the extreme labels to occur inside the data range. These changes provide flexibility in problematic cases, without degrading quality elsewhere. 
We also propose an additional optimization criterion, legibility, which allows us to simultaneously optimize over label formatting, font size, and orientation. To solve this revised optimization problem, we describe the optimization function and an efficient search algorithm. Finally, we compare our method to previous work using both quantitative and qualitative metrics. This paper is a good example of how ideas from automated graphic design can be applied to information visualization.", "title": "" }, { "docid": "325e06672549b9325fbc767375266ccc", "text": "Bio-inspired algorithms like Genetic Algorithms and Fuzzy Inference Systems (FIS) are nowadays widely adopted as hybrid techniques in commercial and industrial environment. In this paper we present an interesting application of the fuzzy-GA paradigm to Smart Grids. The main aim consists in performing decision making for power flow management tasks in the proposed microgrid model equipped by renewable sources and an energy storage system, taking into account the economical profit in energy trading with the main-grid. In particular this study focuses on the application of a Hierarchical Genetic Algorithm (HGA) for tuning the Rule Base (RB) of a Fuzzy Inference System (FIS), trying to discover a minimal fuzzy rules set in a Fuzzy Logic Controller (FLC) adopted to perform decision making in the microgrid. The HGA rationale focuses on a particular encoding scheme, based on control genes and parametric genes applied to the optimization of the FIS parameters, allowing to perform a reduction in the structural complexity of the RB. This approach will be referred in the following as fuzzy-HGA. Results are compared with a simpler approach based on a classic fuzzy-GA scheme, where both FIS parameters and rule weights are tuned, while the number of fuzzy rules is fixed in advance. Experiments shows how the fuzzy-HGA approach adopted for the synthesis of the proposed controller outperforms the classic fuzzy-GA scheme, increasing the accounting profit by 67% in the considered energy trading problem yielding at the same time a simpler RB. keywords: Microgrid, Energy Management System, Battery Energy Storage, Power Flow Optimization, Storage System Management, Fuzzy Systems, Evolutionary Computation, Hierarchical Genetic Algorithms.", "title": "" }, { "docid": "cae661146bc0156af25d8014cb61ef0b", "text": "The two critical factors distinguishing inventory management in a multifirm supply-chain context from the more traditional centrally planned perspective are incentive conflicts and information asymmetries. We study the well-known order quantity/reorder point (Q r) model in a two-player context, using a framework inspired by observations during a case study. We show how traditional allocations of decision rights to supplier and buyer lead to inefficient outcomes, and we use principal-agent models to study the effects of information asymmetries about setup cost and backorder cost, respectively. We analyze two “opposite” models of contracting on inventory policies. First, we derive the buyer’s optimal menu of contracts when the supplier has private information about setup cost, and we show how consignment stock can help reduce the impact of this information asymmetry. Next, we study consignment and assume the supplier cannot observe the buyer’s backorder cost. We derive the supplier’s optimal menu of contracts on consigned stock level and show that in this case, the supplier effectively has to overcompensate the buyer for the cost of each stockout. 
Our theoretical analysis and the case study suggest that consignment stock helps reduce cycle stock by providing the supplier with an additional incentive to decrease batch size, but simultaneously gives the buyer an incentive to increase safety stock by exaggerating backorder costs. This framework immediately points to practical recommendations on how supply-chain incentives should be realigned to overcome existing information asymmetries.", "title": "" }, { "docid": "b1d1196f064bce5c1f6df75a6a5f8bb2", "text": "Studies of ad hoc wireless networks are a relatively new field gaining more popularity for various new applications. In these networks, the Medium Access Control (MAC) protocols are responsible for coordinating the access from active nodes. These protocols are of significant importance since the wireless communication channel is inherently prone to errors and unique problems such as the hidden-terminal problem, the exposedterminal problem, and signal fading effects. Although a lot of research has been conducted on MAC protocols, the various issues involved have mostly been presented in isolation of each other. We therefore make an attempt to present a comprehensive survey of major schemes, integrating various related issues and challenges with a view to providing a big-picture outlook to this vast area. We present a classification of MAC protocols and their brief description, based on their operating principles and underlying features. In conclusion, we present a brief summary of key ideas and a general direction for future work.", "title": "" }, { "docid": "2ebe6832af61085200d4aef27f2be3a5", "text": "This paper deals with the development and the parameter identification of an anaerobic digestion process model. A two-step (acidogenesis-methanization) mass-balance model has been considered. The model incorporates electrochemical equilibria in order to include the alkalinity, which has to play a central role in the related monitoring and control strategy of a treatment plant. The identification is based on a set of dynamical experiments designed to cover a wide spectrum of operating conditions that are likely to take place in the practical operation of the plant. A step by step identification procedure to estimate the model parameters is presented. The results of 70 days of experiments in a 1-m(3) fermenter are then used to validate the model.", "title": "" }, { "docid": "9a29bcb5ca21c33140a199763ab4bc5f", "text": "The Stadtpilot project aims at autonomous driving on Braunschweig's inner city ring road. For this purpose, an autonomous vehicle called “Leonie” has been developed. In October 2010, after two years of research, “Leonie's” abilities were presented in a public demonstration. This vehicle is one of the first worldwide to show the ability of driving autonomously in real urban traffic scenarios. This paper describes the legal issues and the homologation process for driving autonomously in public traffic in Braunschweig, Germany. It also dwells on the Safety Concept, the system architecture and current research activities.", "title": "" }, { "docid": "39030e91e22d222bf5f5e0eabbe02a38", "text": "Serratia marcescens has been recognized as an important cause of nosocomial and community-acquired infections. To our knowledge, we describe the first case of S. marcescens rhabdomyolysis, most probably related to acute cholecystitis and secondary bacteremia. The condition was successfully managed with levofloxacin. 
Keeping in mind the relevant morbidity and mortality associated with bacterial rhabdomyolysis, physicians should consider this possibility in patients with suspected or proven bacterial disease. We suggest S. marcescens should be regarded as a new causative agent of infectious rhabdomyolysis.", "title": "" }, { "docid": "38ae190a4a81a33dd818403723505f29", "text": "We propose a novel deep learning model for joint document-level entity disambiguation, which leverages learned neural representations. Key components are entity embeddings, a neural attention mechanism over local context windows, and a differentiable joint inference stage for disambiguation. Our approach thereby combines benefits of deep learning with more traditional approaches such as graphical models and probabilistic mention-entity maps. Extensive experiments show that we are able to obtain competitive or stateof-the-art accuracy at moderate computational costs.", "title": "" }, { "docid": "e7fc6335fc08f3c35dec43b48c4f70ca", "text": "The consumer concern on the originality of rice variety and the quality of rice leads to originality certification of rice by existing institutions. Technology helps human to perform evaluations of food grains using images of objects. This study developed a system as a tool to identify rice varieties. Identification process was performed by analyzing rice images using image processing. The analyzed features for identification consist of six color features, four morphological features, and two texture features. Classifier used LVQ neural network algorithm. Identification results using a combination of all features gave average accuracy of 70.3% with the highest classification accuracy level of 96.6% for Mentik Wangi and the lowest classification accuracy of 30% for Cilosari.", "title": "" }, { "docid": "98df132e28f5b5329f9c76142340a318", "text": "Agriculture sector is evolving with the advent of the information and communication technology. Efforts are being made to enhance the productivity and reduce losses by using the state of the art technology and equipment. As most of the farmers are unaware of the technology and latest practices, many expert systems have been developed in the world to facilitate the farmers. However, these expert systems rely on the stored knowledge base. We propose an expert system based on the Internet of Things (IoT) that will use the input data collected in real time. It will help to take proactive and preventive actions to minimize the losses due to diseases and insects/pests. Keywords—Internet of Things; Smart Agriculture; Cotton; Plant Diseases; Wireless Sensor Network", "title": "" }, { "docid": "f21850cde63b844e95db5b9916db1c30", "text": "Foreign Exchange (Forex) market is a complex and challenging task for prediction due to uncertainty movement of exchange rate. However, these movements over timeframe also known as historical Forex data that offered a generic repeated trend patterns. This paper uses the features extracted from trend patterns to model and predict the next day trend. Hidden Markov Models (HMMs) is applied to learn the historical trend patterns, and use to predict the next day movement trends. We use the 2011 Forex historical data of Australian Dollar (AUS) and European Union Dollar (EUD) against the United State Dollar (USD) for modeling, and the 2012 and 2013 Forex historical data for validating the proposed model. 
The experimental results show outperforms prediction result for both years.", "title": "" }, { "docid": "54bf53b120f5fa1c0cdfad80e5e264c9", "text": "To ensure safety in the construction of important metallic components for roadworthiness, it is necessary to check every component thoroughly using non-destructive testing. In last decades, X-ray testing has been adopted as the principal non-destructive testing method to identify defects within a component which are undetectable to the naked eye. Nowadays, modern computer vision techniques, such as deep learning and sparse representations, are opening new avenues in automatic object recognition in optical images. These techniques have been broadly used in object and texture recognition by the computer vision community with promising results in optical images. However, a comprehensive evaluation in X-ray testing is required. In this paper, we release a new dataset containing around 47.500 cropped X-ray images of 32 32 pixels with defects and no-defects in automotive components. Using this dataset, we evaluate and compare 24 computer vision techniques including deep learning, sparse representations, local descriptors and texture features, among others. We show in our experiments that the best performance was achieved by a simple LBP descriptor with a SVM-linear classifier obtaining 97% precision and 94% recall. We believe that the methodology presented could be used in similar projects that have to deal with automated detection of defects.", "title": "" }, { "docid": "9fdb52d61c5f6d278c656f75d22aa10d", "text": "BACKGROUND\nIncreasing demand for memory assessment in clinical settings in Iran, as well as the absence of a comprehensive and standardized task based upon the Persian culture and language, requires an appropriate culture- and language-specific version of the commonly used neuropsychological measure of verbal learning and memory, the Rey Auditory Verbal Learning Test (RAVLT).\n\n\nMETHODS\nThe Persian adapted version of the original RAVLT and two other alternate word lists were generated based upon criteria previously set for developing new word lists. A total of 90 subjects (three groups of 30 persons), aged 29.75±7.10 years, volunteered to participate in our study and were tested using the original word list. The practice effect was assessed by retesting the first and second groups using the same word list after 30 and 60 days, respectively. The test-retest reliability was evaluated by retesting the third group of participants twice using two new alternate word lists with an interval of 30 days.\n\n\nRESULTS\nThe re-administration of the same list after one or even two months led to significant practice effects. However, the use of alternate forms after a one-month delay yielded no significant difference across the forms. The first and second trials, as well as the total, immediate, and delayed recall scores showed the best reliability in retesting by the alternate list.\n\n\nCONCLUSION\nThe difference between the generated forms was minor, and it seems that the Persian version of the RAVLT is a reliable instrument for repeated neuropsychological testing as long as alternate forms are used and scores are carefully chosen.  
", "title": "" }, { "docid": "105b0c048852de36d075b1db929c1fa4", "text": "OBJECTIVES\nThis study was carried out to investigate the potential of titanium to induce hypersensitivity in patients chronically exposed to titanium-based dental or endoprosthetic implants.\n\n\nMETHODS\nFifty-six patients who had developed clinical symptoms after receiving titanium-based implants were tested in the optimized lymphocyte transformation test MELISA against 10 metals including titanium. Out of 56 patients, 54 were patch-tested with titanium as well as with other metals. The implants were removed in 54 patients (2 declined explantation), and 15 patients were retested in MELISA.\n\n\nRESULTS\nOf the 56 patients tested in MELISA, 21 (37.5%) were positive, 16 (28.6%) ambiguous, and 19 (33.9%) negative to titanium. In the latter group, 11 (57.9%) showed lymphocyte reactivity to other metals, including nickel. All 54 patch-tested patients were negative to titanium. Following removal of the implants, all 54 patients showed remarkable clinical improvement. In the 15 retested patients, this clinical improvement correlated with normalization in MELISA reactivity.\n\n\nCONCLUSION\nThese data clearly demonstrate that titanium can induce clinically-relevant hypersensitivity in a subgroup of patients chronically exposed via dental or endoprosthetic implants.", "title": "" }, { "docid": "2d404ea42ea4e4a0a20778c586c2490b", "text": "This paper presents a method for losslessly compressing multi-channel electroencephalogram signals. The Karhunen-Loeve transform is used to exploit the inter-correlation among the EEG channels. The transform is approximated using lifting scheme which results in a reversible realization under finite precision processing. An integer time-frequency transform is applied to further minimize the temporal redundancy", "title": "" }, { "docid": "5e0cff7f2b8e5aa8d112eacf2f149d60", "text": "THEORIES IN AI FALL INT O TWO broad categories: mechanismtheories and contenttheories. Ontologies are content the ories about the sor ts of objects, properties of objects,and relations between objects tha t re possible in a specif ed domain of kno wledge. They provide potential ter ms for descr ibing our knowledge about the domain. In this article, we survey the recent de velopment of the f ield of ontologies in AI. We point to the some what different roles ontologies play in information systems, naturallanguage under standing, and knowledgebased systems. Most r esear ch on ontologies focuses on what one might characterize as domain factual knowledge, because kno wlede of that type is par ticularly useful in natural-language under standing. There is another class of ontologies that are important in KBS—one that helps in shar ing knoweldge about reasoning str ategies or pr oblemsolving methods. In a f ollow-up article, we will f ocus on method ontolo gies.", "title": "" }, { "docid": "1e2a64369279d178ee280ed7e2c0f540", "text": "We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples.", "title": "" } ]
scidocsrr
6e69ec92774bbaa8842689871960d123
Emotion and motivation: the role of the amygdala, ventral striatum, and prefrontal cortex
[ { "docid": "9d9714639d8f5c24bdb3f731f31c88d7", "text": "Controversy surrounds the function of the anterior cingulate cortex. Recent discussions about its role in behavioural control have centred on three main issues: its involvement in motor control, its proposed role in cognition and its relationship with the arousal/drive state of the organism. I argue that the overlap of these three domains is key to distinguishing the anterior cingulate cortex from other frontal regions, placing it in a unique position to translate intentions to actions.", "title": "" } ]
[ { "docid": "b4bc5ccbe0929261856d18272c47a3de", "text": "ROC analysis is increasingly being recognised as an important tool for evaluation and comparison of classifiers when the operating characteristics (i.e. class distribution and cost parameters) are not known at training time. Usually, each classifier is characterised by its estimated true and false positive rates and is represented by a single point in the ROC diagram. In this paper, we show how a single decision tree can represent a set of classifiers by choosing different labellings of its leaves, or equivalently, an ordering on the leaves. In this setting, rather than estimating the accuracy of a single tree, it makes more sense to use the area under the ROC curve (AUC) as a quality metric. We also propose a novel splitting criterion which chooses the split with the highest local AUC. To the best of our knowledge, this is the first probabilistic splitting criterion that is not based on weighted average impurity. We present experiments suggesting that the AUC splitting criterion leads to trees with equal or better AUC value, without sacrificing accuracy if a single labelling is chosen.", "title": "" }, { "docid": "86dd65bddeb01d4395b81cef0bc4f00e", "text": "Many people may see the development of software and hardware like different disciplines. However, there are great similarities between them that have been shown due to the appearance of extensions for general purpose programming languages for its use as hardware description languages. In this contribution, the approach proposed by the MyHDL package to use Python as an HDL is analyzed by making a comparative study. This study is based on the independent application of Verilog and Python based flows to the development of a real peripheral. The use of MyHDL has revealed to be a powerful and promising tool, not only because of the surprising results, but also because it opens new horizons towards the development of new techniques for modeling and verification, using the full power of one of the most versatile programming languages nowadays.", "title": "" }, { "docid": "1891bf842d446a7d323dc207b38ff5a9", "text": "We use linear programming techniques to obtain new upper bounds on the maximal squared minimum distance of spherical codes with fixed cardinality. Functions Qj(n, s) are introduced with the property that Qj(n, s) < 0 for some j > m iff the Levenshtein bound Lm(n, s) on A(n, s) = max{|W | : W is an (n, |W |, s) code} can be improved by a polynomial of degree at least m+1. General conditions on the existence of new bounds are presented. We prove that for fixed dimension n ≥ 5 there exist a constant k = k(n) such that all Levenshtein bounds Lm(n, s) for m ≥ 2k− 1 can be improved. An algorithm for obtaining new bounds is proposed and discussed.", "title": "" }, { "docid": "655ca54fc6867d05b7a17fe2f0c2905e", "text": "First of all, the railway traffic control process should ensure the safety. One of the current research areas is to ensure the security of data in the distributed rail traffic control systems using wireless networks. Emerging security threats are the result of, among others, an unknown number of users who may want to access the network, and an unknown number and type of equipment that can be connected to the network. It can cause potential threats resulting from unknown format of data and hacker attacks. In order to counteract these threats, it is necessary to apply safety functions. These functions include the use of data integrity code and encryption methods. 
Additionally, due to character of railway traffic control systems, it is necessary to keep time determinism while sending telegrams. Exceeding the maximum execution time of a cryptographic algorithm and creating too large blocks of data constitute two critical factors that should be taken into account while developing the system for data transmission. This could result in the inability to transmit data at a given throughput of the transmission channel (bandwidth) at a certain time. The paper presents analysis of delays resulting from the realization of safety functions: such as to prepare the data for transfer and their later decoding. Following block encryption algorithms have been analyzed: Blowfish, Twofish, DES, 3DES, AES-128, AES-192 and AES-256 for modes: ECB, CBC, PCBC, CFB, OFB, CTR and data integrity codes: MD-5, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224 and SHA-512/256. The obtained results can be very helpful in the development of new rail traffic control systems in which wireless data transmission is planned.", "title": "" }, { "docid": "7925100b85dce273b92f4d9f52253cda", "text": "Named entities such as people, locations, and organizations play a vital role in characterizing online content. They often reflect information of interest and are frequently used in search queries. Although named entities can be detected reliably from textual content, extracting relations among them is more challenging, yet useful in various applications (e.g., news recommending systems). In this paper, we present a novel model and system for learning semantic relations among named entities from collections of news articles. We model each named entity occurrence with sparse structured logistic regression, and consider the words (predictors) to be grouped based on background semantics. This sparse group LASSO approach forces the weights of word groups that do not influence the prediction towards zero. The resulting sparse structure is utilized for defining the type and strength of relations. Our unsupervised system yields a named entities’ network where each relation is typed, quantified, and characterized in context. These relations are the key to understanding news material over time and customizing newsfeeds for readers. Extensive evaluation of our system on articles from TIME magazine and BBC News shows that the learned relations correlate with static semantic relatedness measures like WLM, and capture the evolving relationships among named entities over time.", "title": "" }, { "docid": "12fa352b1e5912f67337e7dc42c3d4b1", "text": "A novel parallel VLSI architecture is proposed in order to improve the performance of the H.265/HEVC deblocking filter. The overall computation is pipelined, and a new parallel-zigzag processing order is introduced to achieve high throughput. The processing order of the filter is efficiently rearranged to process the horizontal edges and vertical edges at the same time. The proposed H.265/HEVC deblocking filter architecture improves the parallelism by dissolving the data dependency between the adjacent filtering operations. Our design is also compatible with H.264/AVC. Experimental results demonstrate that our architecture shows the best performance compared with other architectures known so far at the expense of the slightly increased gate count. We improve the performance by 52.3%, while the area is increased by 25.8% compared with the previously known best architecture for H.264/AVC. 
The operating clock frequency of our design is 226 MHz in TSMC LVT 65 process. The proposed design delivers the performance to process 1080p HD at 60 fps.", "title": "" }, { "docid": "3f30c821132e07838de325c4f2183f84", "text": "This paper argues for the recognition of important experiential aspects of consumption. Specifically, a general framework is constructed to represent typical consumer behavior variables. Based on this paradigm, the prevailing information processing model is contrasted with an experiential view that focuses on the symbolic, hedonic, and esthetic nature of consumption. This view regards the consumption experience as a phenomenon directed toward the pursuit of fantasies, feelings, and fun.", "title": "" }, { "docid": "2496fa63868717ce2ed56c1777c4b0ed", "text": "Person re-identification (reID) is an important task that requires to retrieve a person’s images from an image dataset, given one image of the person of interest. For learning robust person features, the pose variation of person images is one of the key challenges. Existing works targeting the problem either perform human alignment, or learn human-region-based representations. Extra pose information and computational cost is generally required for inference. To solve this issue, a Feature Distilling Generative Adversarial Network (FD-GAN) is proposed for learning identity-related and pose-unrelated representations. It is a novel framework based on a Siamese structure with multiple novel discriminators on human poses and identities. In addition to the discriminators, a novel same-pose loss is also integrated, which requires appearance of a same person’s generated images to be similar. After learning pose-unrelated person features with pose guidance, no auxiliary pose information and additional computational cost is required during testing. Our proposed FD-GAN achieves state-of-the-art performance on three person reID datasets, which demonstrates that the effectiveness and robust feature distilling capability of the proposed FD-GAN. ‡‡", "title": "" }, { "docid": "2a56585a288405b9adc7d0844980b8bf", "text": "In this paper we propose the first exact solution to the problem of estimating the 3D room layout from a single image. This problem is typically formulated as inference in a Markov random field, where potentials count image features (e.g ., geometric context, orientation maps, lines in accordance with vanishing points) in each face of the layout. We present a novel branch and bound approach which splits the label space in terms of candidate sets of 3D layouts, and efficiently bounds the potentials in these sets by restricting the contribution of each individual face. We employ integral geometry in order to evaluate these bounds in constant time, and as a consequence, we not only obtain the exact solution, but also in less time than approximate inference tools such as message-passing. We demonstrate the effectiveness of our approach in two benchmarks and show that our bounds are tight, and only a few evaluations are necessary.", "title": "" }, { "docid": "6974bf94292b51fc4efd699c28c90003", "text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. 
Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.", "title": "" }, { "docid": "c508f62dfd94d3205c71334638790c54", "text": "Financial and capital markets (especially stock markets) are considered high return investment fields, which in the same time are dominated by uncertainty and volatility. Stock market prediction tries to reduce this uncertainty and consequently the risk. As stock markets are influenced by many economical, political and even psychological factors, it is very difficult to forecast the movement of future values. Since classical statistical methods (primarily technical and fundamental analysis) are unable to deal with the non-linearity in the dataset, thus it became necessary the utilization of more advanced forecasting procedures. Financial prediction is a research active area and neural networks have been proposed as one of the most promising methods for such predictions. Artificial Neural Networks (ANNs) mimics, simulates the learning capability of the human brain. NNs are able to find accurate solutions in a complex, noisy environment or even to deal efficiently with partial information. In the last decade the ANNs have been widely used for predicting financial markets, because they are capable to detect and reproduce linear and nonlinear relationships among a set of variables. Furthermore they have a potential of learning the underlying mechanics of stock markets, i.e. to capture the complex dynamics and non-linearity of the stock market time series. In this paper, study we will get acquainted with some financial time series analysis concepts and theories linked to stock markets, as well as with the neural networks based systems and hybrid techniques that were used to solve several forecasting problems concerning the capital, financial and stock markets. Putting the foregoing experimental results to use, we will develop, implement a multilayer feedforward neural network based financial time series forecasting system. Thus, this system will be used to predict the future index values of major US and European stock exchanges and the evolution of interest rates as well as the future stock price of some US mammoth companies (primarily from IT branch).", "title": "" }, { "docid": "36fca3bd6a23b2f99438fe07ec0f0b9f", "text": "Best management practices (BMPs) have been widely used to address hydrology and water quality issues in both agricultural and urban areas. Increasing numbers of BMPs have been studied in research projects and implemented in watershed management projects, but a gap remains in quantifying their effectiveness through time. In this paper, we review the current knowledge about BMP efficiencies, which indicates that most empirical studies have focused on short-term efficiencies, while few have explored long-term efficiencies. 
Most simulation efforts that consider BMPs assume constant performance irrespective of ages of the practices, generally based on anticipated maintenance activities or the expected performance over the life of the BMP(s). However, efficiencies of BMPs likely change over time irrespective of maintenance due to factors such as degradation of structures and accumulation of pollutants. Generally, the impacts of BMPs implemented in water quality protection programs at watershed levels have not been as rapid or large as expected, possibly due to overly high expectations for practice long-term efficiency, with BMPs even being sources of pollutants under some conditions and during some time periods. The review of available datasets reveals that current data are limited regarding both short-term and long-term BMP efficiency. Based on this review, this paper provides suggestions regarding needs and opportunities. Existing practice efficiency data need to be compiled. New data on BMP efficiencies that consider important factors, such as maintenance activities, also need to be collected. Then, the existing and new data need to be analyzed. Further research is needed to create a framework, as well as modeling approaches built on the framework, to simulate changes in BMP efficiencies with time. The research community needs to work together in addressing these needs and opportunities, which will assist decision makers in formulating better decisions regarding BMP implementation in watershed management projects.", "title": "" }, { "docid": "fac03559daded831095dfc9e083b794d", "text": "Multi-label classification is prevalent in many real-world applications, where each example can be associated with a set of multiple labels simultaneously. The key challenge of multi-label classification comes from the large space of all possible label sets, which is exponential to the number of candidate labels. Most previous work focuses on exploiting correlations among different labels to facilitate the learning process. It is usually assumed that the label correlations are given beforehand or can be derived directly from data samples by counting their label co-occurrences. However, in many real-world multi-label classification tasks, the label correlations are not given and can be hard to learn directly from data samples within a moderate-sized training set. Heterogeneous information networks can provide abundant knowledge about relationships among different types of entities including data samples and class labels. In this paper, we propose to use heterogeneous information networks to facilitate the multi-label classification process. By mining the linkage structure of heterogeneous information networks, multiple types of relationships among different class labels and data samples can be extracted. Then we can use these relationships to effectively infer the correlations among different class labels in general, as well as the dependencies among the label sets of data examples inter-connected in the network. Empirical studies on real-world tasks demonstrate that the performance of multi-label classification can be effectively boosted using heterogeneous information net- works.", "title": "" }, { "docid": "516a2ec7c1dc332a4b375be7c11ba48e", "text": "Due to Evolution of internet and social media, every internet user expresses his opinion and views on the web. These views are both regarding day-to-day transaction and international issues as well. 
With the rapid growth of web technology internet has become the place for online learning and exchange ideas also. With this information other users make up their mind about a particular service, product or organization. This gives birth to a huge opinion data available online in the form of on-line review site, twitter, facebook and personal blogs etc. This paper focuses on review of Opinion mining and sentiment analysis as it is the process of examining the text (opinion or review) about a topic written in a natural language and classify them as positive, negative or neutral based on the humans sentiments involved in it. In this paper we have reviewed papers of last ten years to bring the research done in the field of sentiment analysis at a common platform. It includes sentiment analysis tools, levels of sentiment analysis, its challenges and issues thus it will be very useful for the new researchers to have all information at a glance.", "title": "" }, { "docid": "3476f91f068102ccf35c3855102f4d1b", "text": "Verification and validation (V&V) are the primary means to assess accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence application areas, such as, nuclear reactor safety, underground storage of nuclear waste, and safety of nuclear weapons. Although the terminology is not uniform across engineering disciplines, code verification deals with the assessment of the reliability of the software coding and solution verification deals with the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. Some fields, such as nuclear reactor safety, place little emphasis on code verification benchmarks and great emphasis on validation benchmarks that are closely related to actual reactors operating near safety-critical conditions. This paper proposes recommendations for the optimum design and use of code verification benchmarks based on classical analytical solutions, manufactured solutions, and highly accurate numerical solutions. It is believed that these benchmarks will prove useful to both in-house developed codes, as well as commercially licensed codes. In addition, this paper proposes recommendations for the design and use of validation benchmarks with emphasis on careful design of building-block experiments, estimation of experiment measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that predictive capability of a computational model is built on both the measurement of achievement in V&V, as well as how closely related are the V&V benchmarks to the actual application of interest, e.g., the magnitude of extrapolation beyond a validation benchmark to a complex engineering system of interest.", "title": "" }, { "docid": "14b616d5737369e3eecc7da82e97f0e8", "text": "This paper presents a novel algorithm which uses compact hash bits to greatly improve the efficiency of non-linear kernel SVM in very large scale visual classification problems.
Our key idea is to represent each sample with compact hash bits, over which an inner product is defined to serve as the surrogate of the original nonlinear kernels. Then the problem of solving the nonlinear SVM can be transformed into solving a linear SVM over the hash bits. The proposed Hash-SVM enjoys dramatic storage cost reduction owing to the compact binary representation, as well as a (sub-)linear training complexity via linear SVM. As a critical component of Hash-SVM, we propose a novel hashing scheme for arbitrary non-linear kernels via random subspace projection in reproducing kernel Hilbert space. Our comprehensive analysis reveals a well behaved theoretic bound of the deviation between the proposed hashing-based kernel approximation and the original kernel function. We also derive requirements on the hash bits for achieving a satisfactory accuracy level. Several experiments on large-scale visual classification benchmarks are conducted, including one with over 1 million images. The results show that Hash-SVM greatly reduces the computational complexity (more than ten times faster in many cases) while keeping comparable accuracies.", "title": "" }, { "docid": "1ed9f257129a45388fcf976b87e37364", "text": "Mobile cloud computing is an extension of cloud computing that allow the users to access the cloud service via their mobile devices. Although mobile cloud computing is convenient and easy to use, the security challenges are increasing significantly. One of the major issues is unauthorized access. Identity Management enables to tackle this issue by protecting the identity of users and controlling access to resources. Although there are several IDM frameworks in place, they are vulnerable to attacks like timing attacks in OAuth, malicious code attack in OpenID and huge amount of information leakage when user’s identity is compromised in Single Sign-On. Our proposed framework implicitly authenticates a user based on user’s typing behavior. The authentication information is encrypted into homomorphic signature before being sent to IDM server and tokens are used to authorize users to access the cloud resources. Advantages of our proposed framework are: user’s identity protection and prevention from unauthorized access.", "title": "" }, { "docid": "b1272039194d07ff9b7568b7f295fbfb", "text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.", "title": "" }, { "docid": "ea87229e46fd049930c75a9d5187fd6c", "text": "Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. 
Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies.", "title": "" } ]
scidocsrr
14ce8d3f45975148c11e1ea05d01b5c8
Learning Policies to Forecast Agent Behavior with Visual Data
[ { "docid": "e49f04ff71d0718eff9a3a6005b2a689", "text": "Energy-Based Models (EBMs) capture dependencies between v ariables by associating a scalar energy to each configuration of the variab les. Inference consists in clamping the value of observed variables and finding config urations of the remaining variables that minimize the energy. Learning consi sts in finding an energy function in which observed configurations of the variables a re given lower energies than unobserved ones. The EBM approach provides a common the re ical framework for many learning models, including traditional discr minative and generative approaches, as well as graph-transformer networks, co nditi nal random fields, maximum margin Markov networks, and several manifold learn ing methods. Probabilistic models must be properly normalized, which so metimes requires evaluating intractable integrals over the space of all poss ible variable configurations. Since EBMs have no requirement for proper normalizat ion, his problem is naturally circumvented. EBMs can be viewed as a form of non-p robabilistic factor graphs, and they provide considerably more flexibility in th e design of architectures and training criteria than probabilistic approaches .", "title": "" } ]
[ { "docid": "5c2297cf5892ebf9864850dc1afe9cbf", "text": "In this paper, we propose a novel technique for generating images in the 3D domain from images with high degree of geometrical transformations. By coalescing two popular concurrent methods that have seen rapid ascension to the machine learning zeitgeist in recent years: GANs (Goodfellow et. al.) and Capsule networks (Sabour, Hinton et. al.) we present: CapsGAN. We show that CapsGAN performs better than or equal to traditional CNN based GANs in generating images with high geometric transformations using rotated MNIST. In the process, we also show the efficacy of using capsules architecture in the GANs domain. Furthermore, we tackle the Gordian Knot in training GANs the performance control and training stability by experimenting with using Wasserstein distance (gradient clipping, penalty) and Spectral Normalization. The experimental findings of this paper should propel the application of capsules and GANs in the still exciting and nascent domain of 3D image generation, and plausibly video (frame) generation.", "title": "" }, { "docid": "dd11a04de8288feba2b339cca80de41c", "text": "A methodology for the automatic design optimization of analog circuits is presented. A non-fixed topology approach is followed. A symbolic simulator, called ISAAC, generates an analytic AC model for any analog circuit, time-continuous or time-discrete, CMOS or bipolar. ISAAC's expressions can be fully symbolic or mixed numeric-symbolic, exact or simplified. The model is passed to the design optimization program OPTIMAN. For a user selected circuit topology, the independent design variables are automatically extracted and OPTIMAN sizes all elements to satisfy the performance constraints, thereby optimizing a user defined design objective. The optimization algorithm is simulated annealing. Practical examples show that OPTIMAN quickly designs analog circuits, closely meeting the specifications, and that it is a flexible and reliable design and exploration tool.", "title": "" }, { "docid": "a2df6d7e35323f02026b180270dcf205", "text": "In an early study, a thermal model has been developed, using finite element simulations, to study the temperature field and response in the electron beam additive manufacturing (EBAM) process, with an ability to simulate single pass scanning only. In this study, an investigation was focused on the initial thermal conditions, redesigned to analyze a critical substrate thickness, above which the preheating temperature penetration will not be affected. Extended studies are also conducted on more complex process configurations, such as multi-layer raster scanning, which are close to actual operations, for more accurate representations of the transient thermal phenomenon.", "title": "" }, { "docid": "e2427ff836c8b83a75d8f7074656a025", "text": "With the rapid growth of smartphone and tablet users, Device-to-Device (D2D) communications have become an attractive solution for enhancing the performance of traditional cellular networks. However, relevant security issues involved in D2D communications have not been addressed yet. In this paper, we investigate the security requirements and challenges for D2D communications, and present a secure and efficient key agreement protocol, which enables two mobile devices to establish a shared secret key for D2D communications without prior knowledge. Our approach is based on the Diffie-Hellman key agreement protocol and commitment schemes. 
Compared to previous work, our proposed protocol introduces less communication and computation overhead. We present the design details and security analysis of the proposed protocol. We also integrate our proposed protocol into the existing Wi-Fi Direct protocol, and implement it using Android smartphones.", "title": "" }, { "docid": "e591165d8e141970b8263007b076dee1", "text": "Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience-a humanlike voice-affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip), did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media. (PsycINFO Database Record", "title": "" }, { "docid": "77d616dc746e74db02215dcf2fdb6141", "text": "It is almost a quarter of a century since the launch in 1968 of NASA's Pioneer 9 spacecraft on the first mission into deep-space that relied on coding to enhance communications on the critical downlink channel. [The channel code used was a binary convolutional code that was decoded with sequential decoding--we will have much to say about this code in the sequel.] The success of this channel coding system had repercussions that extended far beyond NASA's space program. It is no exaggeration to say that the Pioneer 9 mission provided communications engineers with the first incontrovertible demonstration of the practical utility of channel coding techniques and thereby paved the way for the successful application of coding to many other channels.", "title": "" }, { "docid": "936d92f1afcab16a9dfe24b73d5f986d", "text": "Active vision techniques use programmable light sources, such as projectors, whose intensities can be controlled over space and time. We present a broad framework for fast active vision using Digital Light Processing (DLP) projectors. The digital micromirror array (DMD) in a DLP projector is capable of switching mirrors “on” and “off” at high speeds (10/s). An off-the-shelf DLP projector, however, effectively operates at much lower rates (30-60Hz) by emitting smaller intensities that are integrated over time by a sensor (eye or camera) to produce the desired brightness value. 
Our key idea is to exploit this “temporal dithering” of illumination, as observed by a high-speed camera. The dithering encodes each brightness value uniquely and may be used in conjunction with virtually any active vision technique. We apply our approach to five well-known problems: (a) structured light-based range finding, (b) photometric stereo, (c) illumination de-multiplexing, (d) high frequency preserving motion-blur and (e) separation of direct and global scene components, achieving significant speedups in performance. In all our methods, the projector receives a single image as input whereas the camera acquires a sequence of frames.", "title": "" }, { "docid": "873a24a210aa57fc22895500530df2ba", "text": "We describe the winning entry to the Amazon Picking Challenge. From the experience of building this system and competing in the Amazon Picking Challenge, we derive several conclusions: 1) We suggest to characterize robotic system building along four key aspects, each of them spanning a spectrum of solutions—modularity vs. integration, generality vs. assumptions, computation vs. embodiment, and planning vs. feedback. 2) To understand which region of each spectrum most adequately addresses which robotic problem, we must explore the full spectrum of possible approaches. To achieve this, our community should agree on key aspects that characterize the solution space of robotic systems. 3) For manipulation problems in unstructured environments, certain regions of each spectrum match the problem most adequately, and should be exploited further. This is supported by the fact that our solution deviated from the majority of the other challenge entries along each of the spectra.", "title": "" }, { "docid": "34baa9b0e77f6ef290ab54889edf293d", "text": "Concurrent with the enactment of metric conversion legislation by the U. S. Congress in 1975, the Motor and Generator Section of the National Electrical Manufacturer Association (NEMA) voted to proceed with the development of a Guide for the Development of Metric Standards for Motors and Generators, referred to as the \" IMetric Guide\" or \"the Guide.\" The first edition was published in 1978, followed by a second, more extensive, edition in November 1980. A summary of the Metric Guide, is given, including comparison with NEMA and International Electrotechnical Commission (IEC) standards.", "title": "" }, { "docid": "d6e9c09af35c5c661870d456a1dfddb5", "text": "We present NMT-Keras, a flexible toolkit for training deep learning models, which puts a particular emphasis on thedevelopment of advanced applications of neuralmachine translation systems, such as interactive-predictive translation protocols and long-term adaptation of the translation system via continuous learning. NMT-Keras is based on an extended version of the popular Keras library, and it runs on Theano and Tensorflow. State-of-the-art neural machine translation models are deployed and used following the high-level framework provided by Keras. Given its high modularity and flexibility, it also has been extended to tackle different problems, such as image and video captioning, sentence classification and visual question answering.", "title": "" }, { "docid": "51a859f71bd2ec82188826af18204f02", "text": "This study examines the accuracy of 54 online dating photographs posted by heterosexual daters. We report data on (a1) online daters’ self-reported accuracy, (b) independent judges’ perceptions of accuracy, and (c) inconsistencies in the profile photograph identified by trained coders. 
While online daters rated their photos as relatively accurate, independent judges rated approximately 1/3 of the photographs as not accurate. Female photographs were judged as less accurate than male photographs, and were more likely to be older, to be retouched or taken by a professional photographer, and to contain inconsistencies, including changes in hair style and skin quality. The findings are discussed in terms of the tensions experienced by online daters to (a) enhance their physical attractiveness and (b) present a photograph that would not be judged deceptive in subsequent face-to-face meetings. The paper extends the theoretical concept of selective self-presentation to online photographs, and discusses issues of self-deception and social desirability bias.", "title": "" }, { "docid": "46fa91ce587d094441466a7cbe5c5f07", "text": "Automatic facial expression analysis is an interesting and challenging problem which impacts important applications in many areas such as human-computer interaction and data-driven animation. Deriving effective facial representative features from face images is a vital step towards successful expression recognition. In this paper, we evaluate facial representation based on statistical local features called Local Binary Patterns (LBP) for facial expression recognition. Simulation results illustrate that LBP features are effective and efficient for facial expression recognition. A real-time implementation of the proposed approach is also demonstrated which can recognize expressions accurately at the rate of 4.8 frames per second.", "title": "" }, { "docid": "2c4db5a69fd0d23cfccd927b87ecc795", "text": "Current paper examines the management accounting practices of Estonian manufacturing companies, exploring the main impacts on them within a contingency theory framework. The methodology comprises an analysis of 62 responses to a postal questionnaire survey carried out among the largest Estonian manufacturing companies. On the one hand, the present research aims to confirm earlier findings related to the ‘contingent factors’ that influence management accounting, on the other, to identify possible new factors, such as, the legal accounting environment and shortage of properly qualified accountants. 1 University of Tartu, Faculty of Economics and Business Administration, Ass. Prof. of Accounting Department, PhD, E-mail: toom@mtk.ut.ee 2 University of Tartu, Faculty of Economics and Business Administration, Lecturer of Accounting Department, PhD student, E-mail: kertu@mtk.ut.ee Acknowledgements: The authors are grateful to prof. Robert Chenhall from Monash University for his assistance and to visiting prof. Gary Cunningham from Stuttgart University of Technology for his constructive comments. The financial support from the Estonian Science Foundation is herein acknowledged with gratitude.", "title": "" }, { "docid": "d815e254478a9503f1063b5595f48e0f", "text": "•We present an approach to this unpaired image captioning problem by language pivoting. •Our method can effectively capture the characteristics of an image captioner from the pivot language (Chinese) and align it to the target language (English) using another pivot-target (Chinese-English) parallel corpus. 
•Quantitative comparisons against several baseline approaches demonstrate the effectiveness of our method.", "title": "" }, { "docid": "4455233571d9c4fca8cfa2a5eb8ef22f", "text": "This article summarizes the studies of the mechanism of electroacupuncture (EA) in the regulation of the abnormal function of hypothalamic-pituitary-ovarian axis (HPOA) in our laboratory. Clinical observation showed that EA with the effective acupoints could cure some anovulatory patients in a highly effective rate and the experimental results suggested that EA might regulate the dysfunction of HPOA in several ways, which means EA could influence some gene expression of brain, thereby, normalizing secretion of some hormones, such as GnRH, LH and E2. The effects of EA might possess a relative specificity on acupoints.", "title": "" }, { "docid": "bfb5ab3f17045856db6da616f5d82609", "text": "This study examined cognitive distortions and coping styles as potential mediators for the effects of mindfulness meditation on anxiety, negative affect, positive affect, and hope in college students. Our pre- and postintervention design had four conditions: control, brief meditation focused on attention, brief meditation focused on loving kindness, and longer meditation combining both attentional and loving kindness aspects of mindfulness. Each group met weekly over the course of a semester. Longer combined meditation significantly reduced anxiety and negative affect and increased hope. Changes in cognitive distortions mediated intervention effects for anxiety, negative affect, and hope. Further research is needed to determine differential effects of types of meditation.", "title": "" }, { "docid": "b101ab8f2242e85ccd7948b0b3ffe9b4", "text": "This paper describes a language-independent model for multi-class sentiment analysis using a simple neural network architecture of five layers (Embedding, Conv1D, GlobalMaxPooling and two Fully-Connected). The advantage of the proposed model is that it does not rely on language-specific features such as ontologies, dictionaries, or morphological or syntactic pre-processing. Equally important, our system does not use pre-trained word2vec embeddings which can be costly to obtain and train for some languages. In this research, we also demonstrate that oversampling can be an effective approach for correcting class imbalance in the data. We evaluate our methods on three publicly available datasets for English, German and Arabic, and the results show that our system’s performance is comparable to, or even better than, the state of the art for these datasets. We make our source-code publicly available.", "title": "" }, { "docid": "13153476fac37dd879c34907f7db5317", "text": "Lean deveLopment is a product development paradigm with an endto-end focus on creating value for the customer, eliminating waste, optimizing value streams, empowering people, and continuously improving (see Figure 11). Lean thinking has penetrated many industries. It was first used in manufacturing, with clear goals to empower teams, reduce waste, optimize work streams, and above all keep market and customer needs as the primary decision driver.2 This IEEE Software special issue addresses lean software development as opposed to management or manufacturing theories. 
In that context, we sought to address some key questions: What design principles deliver value, and how are they introduced to best manage change?", "title": "" }, { "docid": "17beea6923e7376369691f18b0ca63e2", "text": "This paper investigates the effect of avatar realism on embodiment and social interactions in Virtual Reality (VR). We compared abstract avatar representations based on a wooden mannequin with high fidelity avatars generated from photogrammetry 3D scan methods. Both avatar representations were alternately applied to participating users and to the virtual counterpart in dyadic social encounters to examine the impact of avatar realism on self-embodiment and social interaction quality. Users were immersed in a virtual room via a head mounted display (HMD). Their full-body movements were tracked and mapped to respective movements of their avatars. Embodiment was induced by presenting the users' avatars to themselves in a virtual mirror. Afterwards they had to react to a non-verbal behavior of a virtual interaction partner they encountered in the virtual space. Several measures were taken to analyze the effect of the appearance of the users' avatars as well as the effect of the appearance of the others' avatars on the users. The realistic avatars were rated significantly more human-like when used as avatars for the others and evoked a stronger acceptance in terms of virtual body ownership (VBO). There also was some indication of a potential uncanny valley. Additionally, there was an indication that the appearance of the others' avatars impacts the self-perception of the users.", "title": "" }, { "docid": "372ce38b93c2b3234281e2806aa3bc76", "text": "Sorting a list of input numbers is one of the most fundamental problems in the field of computer science in general and high-throughput database applications in particular. Although literature abounds with various flavors of sorting algorithms, different architectures call for customized implementations to achieve faster sorting times. This paper presents an efficient implementation and detailed analysis of MergeSort on current CPU architectures. Our SIMD implementation with 128-bit SSE is 3.3X faster than the scalar version. In addition, our algorithm performs an efficient multiway merge, and is not constrained by the memory bandwidth. Our multi-threaded, SIMD implementation sorts 64 million floating point numbers in less than 0.5 seconds on a commodity 4-core Intel processor. This measured performance compares favorably with all previously published results. Additionally, the paper demonstrates performance scalability of the proposed sorting algorithm with respect to certain salient architectural features of modern chip multiprocessor (CMP) architectures, including SIMD width and core-count. Based on our analytical models of various architectural configurations, we see excellent scalability of our implementation with SIMD width scaling up to 16X wider than current SSE width of 128-bits, and CMP core-count scaling well beyond 32 cores. Cycle-accurate simulation of Intel’s upcoming x86 many-core Larrabee architecture confirms scalability of our proposed algorithm.", "title": "" } ]
scidocsrr
940646d5e600199f3d0f3f48495c6748
Are Latent Sentence Vectors Cross-Linguistically Invariant?
[ { "docid": "785702d7102fbc3b9089d0daaa0ad814", "text": "Recent advances in neural variational inference have spawned a renaissance in deep latent variable models. In this paper we introduce a generic variational inference framework for generative and conditional models of text. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering. Our neural variational document model combines a continuous stochastic document representation with a bagof-words generative model and achieves the lowest reported perplexities on two standard test corpora. The neural answer selection model employs a stochastic representation layer within an attention mechanism to extract the semantics between a question and answer pair. On two question answering benchmarks this model exceeds all previous published benchmarks.", "title": "" }, { "docid": "bc2cc54a7b01fa7a7c3bf7a0f88bc899", "text": "Usually bilingual word vectors are trained “online”. Mikolov et al. (2013a) showed they can also be found “offline”; whereby two pre-trained embeddings are aligned with a linear transformation, using dictionaries compiled from expert knowledge. In this work, we prove that the linear transformation between two spaces should be orthogonal. This transformation can be obtained using the singular value decomposition. We introduce a novel “inverted softmax” for identifying translation pairs, with which we improve the precision @1 of Mikolov’s original mapping from 34% to 43%, when translating a test set composed of both common and rare English words into Italian. Orthogonal transformations are more robust to noise, enabling us to learn the transformation without expert bilingual signal by constructing a “pseudo-dictionary” from the identical character strings which appear in both languages, achieving 40% precision on the same test set. Finally, we extend our method to retrieve the true translations of English sentences from a corpus of 200k Italian sentences with a precision @1 of 68%.", "title": "" } ]
[ { "docid": "f6f957790ab0655fb28bed62b08b7be3", "text": "According to the signal hypothesis, a signal sequence, once having initiated export of a growing protein chain across the rough endoplasmic reticulum, is cleaved from the mature protein at a specific site. It has long been known that some part of the cleavage specificity resides in the last residue of the signal sequence, which invariably is one with a small, uncharged side-chain, but no further specific patterns of amino acids near the point of cleavage have been discovered so far. In this paper, some such patterns, based on a sample of 78 eukaryotic signal sequences, are presented and discussed, and a first attempt at formulating rules for the prediction of cleavage sites is made.", "title": "" }, { "docid": "0f2d6a8ce07258658f24fb4eec006a02", "text": "Dynamic bandwidth allocation in passive optical networks presents a key issue for providing efficient and fair utilization of the PON upstream bandwidth while supporting the QoS requirements of different traffic classes. In this article we compare the typical characteristics of DBA, such as bandwidth utilization, delay, and jitter at different traffic loads, within the two major standards for PONs, Ethernet PON and gigabit PON. A particular PON standard sets the framework for the operation of DBA and the limitations it faces. We illustrate these differences between EPON and GPON by means of simulations for the two standards. Moreover, we consider the evolution of both standards to their next-generation counterparts with the bit rate of 10 Gb/s and the implications to the DBA. A new simple GPON DBA algorithm is used to illustrate GPON performance. It is shown that the length of the polling cycle plays a crucial but different role for the operation of the DBA within the two standards. Moreover, only minor differences regarding DBA for current and next-generation PONs were found.", "title": "" }, { "docid": "3833e548f316f7c4e93cb49ec278379e", "text": "Computational thinking (CT) is increasingly seen as a core literacy skill for the modern world on par with the longestablished skills of reading, writing, and arithmetic. To promote the learning of CT at a young age we capitalized on children's interest in play. We designed RabBit EscApe, a board game that challenges children, ages 610, to orient tangible, magnetized manipulatives to complete or create paths. We also ran an informal study to investigate the effectiveness of the game in fostering children's problemsolving capacity during collaborative game play. We used the results to inform our instructional interaction design that we think will better support the learning activities and help children hone the involved CT skills. Overall, we believe in the power of such games to challenge children to grow their understanding of CT in a focused and engaging activity.", "title": "" }, { "docid": "69b0c5a4a3d5fceda5e902ec8e0479bb", "text": "Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users in executing computation-intensive tasks via task offloading. 
The problem of joint task offloading and resource allocation is studied in order to maximize the users’ task offloading gains, which is measured by a weighted sum of reductions in task completion time and energy consumption. The considered problem is formulated as a mixed integer nonlinear program (MINLP) that involves jointly optimizing the task offloading decision, uplink transmission power of mobile users, and computing resource allocation at the MEC servers. Due to the combinatorial nature of this problem, solving for optimal solution is difficult and impractical for a large-scale network. To overcome this drawback, we propose to decompose the original problem into a resource allocation (RA) problem with fixed task offloading decision and a task offloading (TO) problem that optimizes the optimal-value function corresponding to the RA problem. We address the RA problem using convex and quasi-convex optimization techniques, and propose a novel heuristic algorithm to the TO problem that achieves a suboptimal solution in polynomial time. Simulation results show that our algorithm performs closely to the optimal solution and that it significantly improves the users’ offloading utility over traditional approaches.", "title": "" }, { "docid": "1db450f3e28907d6940c87d828fc1566", "text": "The task of colorizing black and white images has previously been explored for natural images. In this paper we look at the task of colorization on a different domain: webtoons. To our knowledge this type of dataset hasn't been used before. Webtoons are usually produced in color thus they make a good dataset for analyzing different colorization models. Comics like webtoons also present some additional challenges over natural images, such as occlusion by speech bubbles and text. First we look at some of the previously introduced models' performance on this task and suggest modifications to address their problems. We propose a new model composed of two networks; one network generates sparse color information and a second network uses this generated color information as input to apply color to the whole image. These two networks are trained end-to-end. Our proposed model solves some of the problems observed with other architectures, resulting in better colorizations.", "title": "" }, { "docid": "17c8766c5fcc9b6e0d228719291dcea5", "text": "In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three tradic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.", "title": "" }, { "docid": "6ca8dffc616d38bc528bf830a970d97f", "text": "The Cart-Inverted Pendulum System (CIPS) is a classical benchmark control problem. 
Its dynamics resembles with that of many real world systems of interest like missile launchers, pendubots, human walking and segways and many more. The control of this system is challenging as it is highly unstable, highly non-linear, non-minimum phase system and underactuated. Further, the physical constraints on the track position control voltage etc. also pose complexity in its control design. The thesis begins with the description of the CIPS together with hardware setup used for research, its dynamics in state space and transfer function models. In the past, a lot of research work has been directed to develop control strategies for CIPS. But, very little work has been done to validate the developed design through experiments. Also robustness margins of the developed methods have not been analysed. Thus, there lies an ample opportunity to develop controllers and study the cart-inverted pendulum controlled system in real-time. The objective of this present work is to stabilize the unstable CIPS within the different physical constraints such as in track length and control voltage. Also, simultaneously ensure good robustness. A systematic iterative method for the state feedback design by choosing weighting matrices key to the Linear Quadratic Regulator (LQR) design is presented. But, this yields oscillations in cart position. The Two-Loop-PID controller yields good robustness, and superior cart responses. A sub-optimal LQR based state feedback subjected to H∞ constraints through Linear Matrix Inequalities (LMIs) is solved and it is observed from the obtained results that a good stabilization result is achieved. Non-linear cart friction is identified using an exponential cart friction and is modeled as a plant matrix uncertainty. It has been observed that modeling the cart friction as above has led to improved cart response. Subsequently an integral sliding mode controller has been designed for the CIPS. From the obtained simulation and experiments it is seen that the ISM yields good robustness towards the output channel gain perturbations. The efficacies of the developed techniques are tested both in simulation and experimentation. It has been also observed that the Two-Loop PID Controller yields overall satisfactory response in terms of superior cart position and robustness. In the event of sensor fault the ISM yields best performance out of all the techniques.", "title": "" }, { "docid": "d5adbe2a074711bdfcc5f1840f27bac3", "text": "Graph kernels have emerged as a powerful tool for graph comparison. Most existing graph kernels focus on local properties of graphs and ignore global structure. In this paper, we compare graphs based on their global properties as these are captured by the eigenvectors of their adjacency matrices. We present two algorithms for both labeled and unlabeled graph comparison. These algorithms represent each graph as a set of vectors corresponding to the embeddings of its vertices. The similarity between two graphs is then determined using the Earth Mover’s Distance metric. These similarities do not yield a positive semidefinite matrix. To address for this, we employ an algorithm for SVM classification using indefinite kernels. We also present a graph kernel based on the Pyramid Match kernel that finds an approximate correspondence between the sets of vectors of the two graphs. We further improve the proposed kernel using the Weisfeiler-Lehman framework. 
We evaluate the proposed methods on several benchmark datasets for graph classification and compare their performance to state-of-the-art graph kernels. In most cases, the proposed algorithms outperform the competing methods, while their time complexity remains very attractive.", "title": "" }, { "docid": "80344cbe3abe629a8383679398ea9b4b", "text": "Deception has been extensively studied in many disciplines in social science. With the increasing use of instant messaging (IM) in both informal communication and performing tasks in work place, deception in IM is emerging as an important issue. In this study, we aimed to explore the online behavior of deception in a group IM setting. The empirical results from triadic groups showed that two types of online behavior under investigation could significantly differentiate deceivers from truth-tellers. The findings can potentially broaden our knowledge of behavioral indicators of deception in human interaction and improve deception detection in cyberspace.", "title": "" }, { "docid": "fa1575eeeb9ab02ce7d4dc6a3e1ffc14", "text": "A novel framework using a Bayesian approach for content-based phishing web page detection is presented. Our model takes into account textual and visual contents to measure the similarity between the protected web page and suspicious web pages. A text classifier, an image classifier, and an algorithm fusing the results from classifiers are introduced. An outstanding feature of this paper is the exploration of a Bayesian model to estimate the matching threshold. This is required in the classifier for determining the class of the web page and identifying whether the web page is phishing or not. In the text classifier, the naive Bayes rule is used to calculate the probability that a web page is phishing. In the image classifier, the earth mover's distance is employed to measure the visual similarity, and our Bayesian model is designed to determine the threshold. In the data fusion algorithm, the Bayes theory is used to synthesize the classification results from textual and visual content. The effectiveness of our proposed approach was examined in a large-scale dataset collected from real phishing cases. Experimental results demonstrated that the text classifier and the image classifier we designed deliver promising results, the fusion algorithm outperforms either of the individual classifiers, and our model can be adapted to different phishing cases.", "title": "" }, { "docid": "f72665520c503cd5664439efbe088513", "text": "We investigate repeated matrix games with stochastic players as a microcosm for studying dynamic, multi-agent interactions using the Stochastic Direct Reinforcement (SDR) policy gradient algorithm. SDR is a generalization of Recurrent Reinforcement Learning (RRL) that supports stochastic policies. Unlike other RL algorithms, SDR and RRL use recurrent policy gradients to properly address temporal credit assignment resulting from recurrent structure. Our main goals in this paper are to (1) distinguish recurrent memory from standard, non-recurrent memory for policy gradient RL, (2) compare SDR with Q-type learning methods for simple games, (3) distinguish reactive from endogenous dynamical agent behavior and (4) explore the use of recurrent learning for interacting, dynamic agents. We find that SDR players learn much faster and hence outperform recently-proposed Q-type learners for the simple game Rock, Paper, Scissors (RPS). 
With more complex, dynamic SDR players and opponents, we demonstrate that recurrent representations and SDR’s recurrent policy gradients yield better performance than non-recurrent players. For the Iterated Prisoners Dilemma, we show that non-recurrent SDR agents learn only to defect (Nash equilibrium), while SDR agents with recurrent gradients can learn a variety of interesting behaviors, including cooperation.", "title": "" }, { "docid": "1f06f0b370a827d92dc675f33feaa524", "text": "Cognitive radio networks (CRNs) have emerged as an essential technology to enable dynamic and opportunistic spectrum access which aims to exploit underutilized licensed channels to solve the spectrum scarcity problem. Despite the great benefits that CRNs offer in terms of their ability to improve spectrum utilization efficiency, they suffer from user location privacy issues. Knowing that their whereabouts may be exposed can discourage users from joining and participating in the CRNs, thereby potentially hindering the adoption and deployment of this technology in future generation networks. The location information leakage issue in the CRN context has recently started to gain attention from the research community due to its importance, and several research efforts have been made to tackle it. However, to the best of our knowledge, none of these works have tried to identify the vulnerabilities that are behind this issue or discuss the approaches that could be deployed to prevent it. In this paper, we try to fill this gap by providing a comprehensive survey that investigates the various location privacy risks and threats that may arise from the different components of this CRN technology, and explores the different privacy attacks and countermeasure solutions that have been proposed in the literature to cope with this location privacy issue. We also discuss some open research problems, related to this issue, that need to be overcome by the research community to take advantage of the benefits of this key CRN technology without having to sacrifice the users’ privacy.", "title": "" }, { "docid": "112ec676f74c22393d06bc23eaae50d8", "text": "Multi-user multiple-input multiple-output (MU-MIMO) is the latest communication technology that promises to linearly increase the wireless capacity by deploying more antennas on access points (APs). However, the large number of MIMO antennas will generate a huge amount of digital signal samples in real time. This imposes a grand challenge on the AP design by multiplying the computation and the I/O requirements to process the digital samples. This paper presents BigStation, a scalable architecture that enables realtime signal processing in large-scale MIMO systems which may have tens or hundreds of antennas. Our strategy to scale is to extensively parallelize the MU-MIMO processing on many simple and low-cost commodity computing devices. Our design can incrementally support more antennas by proportionally adding more computing devices. To reduce the overall processing latency, which is a critical constraint for wireless communication, we parallelize the MU-MIMO processing with a distributed pipeline based on its computation and communication patterns. At each stage of the pipeline, we further use data partitioning and computation partitioning to increase the processing speed. As a proof of concept, we have built a BigStation prototype based on commodity PC servers and standard Ethernet switches. 
Our prototype employs 15 PC servers and can support real-time processing of 12 software radio antennas. Our results show that the BigStation architecture is able to scale to tens to hundreds of antennas. With 12 antennas, our BigStation prototype can increase wireless capacity by 6.8x with a low mean processing delay of 860μs. While this latency is not yet low enough for the 802.11 MAC, it already satisfies the real-time requirements of many existing wireless standards, e.g., LTE and WCDMA.", "title": "" }, { "docid": "561e9f599e5dc470ca6f57faa62ebfce", "text": "Rapid learning requires flexible representations to quickly adopt to new evidence. We develop a novel class of models called Attentive Recurrent Comparators (ARCs) that form representations of objects by cycling through them and making observations. Using the representations extracted by ARCs, we develop a way of approximating a dynamic representation space and use it for oneshot learning. In the task of one-shot classification on the Omniglot dataset, we achieve the state of the art performance with an error rate of 1.5%. This represents the first super-human result achieved for this task with a generic model that uses only pixel information.", "title": "" }, { "docid": "6f609fef5fd93e776fd7d43ed91fd4a8", "text": "Wandering is among the most frequent, problematic, and dangerous behaviors for elders with dementia. Frequent wanderers likely suffer falls and fractures, which affect the safety and quality of their lives. In order to monitor outdoor wandering of elderly people with dementia, this paper proposes a real-time method for wandering detection based on individuals' GPS traces. By representing wandering traces as loops, the problem of wandering detection is transformed into detecting loops in elders' mobility trajectories. Specifically, the raw GPS data is first preprocessed to remove noisy and crowded points by performing an online mean shift clustering. A novel method called θ_WD is then presented that is able to detect loop-like traces on the fly. The experimental results on the GPS datasets of several elders have show that the θ_WD method is effective and efficient in detecting wandering behaviors, in terms of detection performance (AUC > 0.99, and 90% detection rate with less than 5 % of the false alarm rate), as well as time complexity.", "title": "" }, { "docid": "2bf678c98d27501443f0f6fdf35151d7", "text": "The goal of video summarization is to distill a raw video into a more compact form without losing much semantic information. However, previous methods mainly consider the diversity and representation interestingness of the obtained summary, and they seldom pay sufficient attention to semantic information of resulting frame set, especially the long temporal range semantics. To explicitly address this issue, we propose a novel technique which is able to extract the most semantically relevant video segments (i.e., valid for a long term temporal duration) and assemble them into an informative summary. To this end, we develop a semantic attended video summarization network (SASUM) which consists of a frame selector and video descriptor to select an appropriate number of video shots by minimizing the distance between the generated description sentence of the summarized video and the human annotated text of the original video. 
Extensive experiments show that our method achieves a superior performance gain over previous methods on two benchmark datasets.", "title": "" }, { "docid": "597bfef473a39b5bf2890a2a697e5c26", "text": "Ripple is a payment system and a digital currency which evolved completely independently of Bitcoin. Although Ripple holds the second highest market cap after Bitcoin, there are surprisingly no studies which analyze the provisions of Ripple. In this paper, we study the current deployment of the Ripple payment system. For that purpose, we overview the Ripple protocol and outline its security and privacy provisions in relation to the Bitcoin system. We also discuss the consensus protocol of Ripple. Contrary to the statement of the Ripple designers, we show that the current choice of parameters does not prevent the occurrence of forks in the system. To remedy this problem, we give a necessary and sufficient condition to prevent any fork in the system. Finally, we analyze the current usage patterns and trade dynamics in Ripple by extracting information from the Ripple global ledger. As far as we are aware, this is the first contribution which sheds light on the current deployment of the Ripple system.", "title": "" }, { "docid": "9631926db0052f89abe3b540789ed08e", "text": "DC/DC converters to power future CPU cores mandate low-voltage power metal-oxide semiconductor field-effect transistors (MOSFETs) with ultra low on-resistance and gate charge. Conventional vertical trench MOSFETs cannot meet the challenge. In this paper, we introduce an alternative device solution, the large-area lateral power MOSFET with a unique metal interconnect scheme and a chip-scale package. We have designed and fabricated a family of lateral power MOSFETs including a sub-10 V class power MOSFET with a record-low R/sub DS(ON)/ of 1m/spl Omega/ at a gate voltage of 6V, approximately 50% of the lowest R/sub DS(ON)/ previously reported. The new device has a total gate charge Q/sub g/ of 22nC at 4.5V and a performance figures of merit of less than 30m/spl Omega/-nC, a 3/spl times/ improvement over the state of the art trench MOSFETs. This new MOSFET was used in a 100-W dc/dc converter as the synchronous rectifiers to achieve a 3.5-MHz pulse-width modulation switching frequency, 97%-99% efficiency, and a power density of 970W/in/sup 3/. The new lateral MOSEFT technology offers a viable solution for the next-generation, multimegahertz, high-density dc/dc converters for future CPU cores and many other high-performance power management applications.", "title": "" }, { "docid": "55ca1e978369711765ed4d333313d61a", "text": "Females frequently score higher on standard tests of empathy, social sensitivity, and emotion recognition than do males. It remains to be clarified, however, whether these gender differences are associated with gender specific neural mechanisms of emotional social cognition. We investigated gender differences in an emotion attribution task using functional magnetic resonance imaging. Subjects either focused on their own emotional response to emotion expressing faces (SELF-task) or evaluated the emotional state expressed by the faces (OTHER-task). Behaviorally, females rated SELF-related emotions significantly stronger than males. Across the sexes, SELF- and OTHER-related processing of facial expressions activated a network of medial and lateral prefrontal, temporal, and parietal brain regions involved in emotional perspective taking. 
During SELF-related processing, females recruited the right inferior frontal cortex and superior temporal sulcus stronger than males. In contrast, there was increased neural activity in the left temporoparietal junction in males (relative to females). When performing the OTHER-task, females showed increased activation of the right inferior frontal cortex while there were no differential activations in males. The data suggest that females recruit areas containing mirror neurons to a higher degree than males during both SELF- and OTHER-related processing in empathic face-to-face interactions. This may underlie facilitated emotional \"contagion\" in females. Together with the observation that males differentially rely on the left temporoparietal junction (an area mediating the distinction between the SELF and OTHERS) the data suggest that females and males rely on different strategies when assessing their own emotions in response to other people.", "title": "" }, { "docid": "7daf4d9d3204cdaf9a1f28a29335802d", "text": "Hole mobility and velocity are extracted from scaled strained-Si0.45Ge0.55 channel p-MOSFETs on insulator. Devices have been fabricated with sub-100-nm gate lengths, demonstrating hole mobility and velocity enhancements in strained- Si0.45Ge0.55 channel devices relative to Si. The effective hole mobility is extracted utilizing the dR/dL method. A hole mobility enhancement is observed relative to Si hole universal mobility for short-channel devices with gate lengths ranging from 65 to 150 nm. Hole velocities extracted using several different methods are compared. The hole velocity of strained-SiGe p-MOSFETs is enhanced over comparable Si control devices. The hole velocity enhancements extracted are on the order of 30%. Ballistic velocity simulations suggest that the addition of (110) uniaxial compressive strain to Si0.45Ge0.55 can result in a more substantial increase in velocity relative to relaxed Si.", "title": "" } ]
scidocsrr
19ed707d952f4078a6a30668bd6ded43
Representing Animations by Principal Components
[ { "docid": "43db0f06e3de405657996b46047fa369", "text": "Given two or more objects of general topology, intermediate objects are constructed by a distance field metamorphosis. In the presented method the interpolation of the distance field is guided by a warp function controlled by a set of corresponding anchor points. Some rules for defining a smooth least-distorting warp function are given. To reduce the distortion of the intermediate shapes, the warp function is decomposed into a rigid rotational part and an elastic part. The distance field interpolation method is modified so that the interpolation is done in correlation with the warp function. The method provides the animator with a technique that can be used to create a set of models forming a smooth transition between pairs of a given sequence of keyframe models. The advantage of the new approach is that it is capable of morphing between objects having a different topological genus where no correspondence between the geometric primitives of the models needs to be established. The desired correspondence is defined by an animator in terms of a relatively small number of anchor points", "title": "" } ]
[ { "docid": "d99005ab76808d74611bc290442019ec", "text": "Over the last decade, the isoxazoline motif has become the intense focus of crop protection and animal health companies in their search for novel pesticides and ectoparasiticides. Herein we report the discovery of sarolaner, a proprietary, optimized-for-animal health use isoxazoline, for once-a-month oral treatment of flea and tick infestation on dogs.", "title": "" }, { "docid": "2a718f193be63630087bd6c5748b332a", "text": "This study investigates the intrasentential assignment of reference to pronouns (him, her) and anaphors (himself, herself) as characterized by Binding Theory in a subgroup of \"Grammatical specifically language-impaired\" (SLI) children. The study aims to (1) provide further insight into the underlying nature of Grammatical SLI in children and (2) elucidate the relationship between different sources of knowledge, that is, syntactic knowledge versus knowledge of lexical properties and pragmatic inference in the assignment of intrasentential coreference. In two experiments, using a picture-sentence pair judgement task, the children's knowledge of the lexical properties versus syntactic knowledge (Binding Principles A and B) in the assignment of reflexives and pronouns was investigated. The responses of 12 Grammatical SLI children (aged 9:3 to 12:10) and three language ability (LA) control groups of 12 children (aged 5:9 to 9:1) were compared. The results indicated that the SLI children and the LA controls may use a combination of conceptual-lexical and pragmatic knowledge (e.g., semantic gender, reflexive marking of the predicate, and assignment of theta roles) to help assign reference to anaphors and pronouns. The LA controls also showed appropriate use of the syntactic knowledge. In contrast, the SLI children performed at chance when syntactic information was crucially required to rule out inappropriate coreference. The data are consistent with an impairment with the (innate) syntactic knowledge characterized by Binding Theory which underlies reference assignment to anaphors and pronouns. We conclude that the SLI children's syntactic representation is underspecified with respect to coindexation between constituents and the syntactic properties of pronouns. Support is provided for the proposal that Grammatical SLI children have a modular language deficit with syntactic dependent structural relationships between constituents, that is, a Representational Deficit with Dependent Relationships (RDDR). Further consideration of the linguistic characteristics of this deficit is made in relation to the hypothesized syntactic representations of young normally developing children.", "title": "" }, { "docid": "b4b2c5f66c948cbd4c5fbff7f9062f12", "text": "China is taking major steps to improve Beijing’s air quality for the 2008 Olympic Games. However, concentrations of fine particulate matter and ozone in Beijing often exceed healthful levels in the summertime. Based on the US EPA’s Models-3/CMAQ model simulation over the Beijing region, we estimate that about 34% of PM2.5 on average and 35–60% of ozone during high ozone episodes at the Olympic Stadium site can be attributed to sources outside Beijing. Neighboring Hebei and Shandong Provinces and the Tianjin Municipality all exert significant influence on Beijing’s air quality. During sustained wind flow from the south, Hebei Province can contribute 50–70% of Beijing’s PM2.5 concentrations and 20–30% of ozone. 
Controlling only local sources in Beijing will not be sufficient to attain the air quality goal set for the Beijing Olympics. There is an urgent need for regional air quality management studies and new emission control strategies to ensure that the air quality goals for 2008 are met. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "003be771526441c38f91f96b7ecb802f", "text": "Robotics research and education have gained significant attention in recent years due to increased development and commercial deployment of industrial and service robots. A majority of researchers working on robot grasping and object manipulation tend to utilize commercially available robot-manipulators equipped with various end effectors for experimental studies. However, commercially available robotic grippers are often expensive and are not easy to modify for specific purposes. To extend the choice of robotic end effectors freely available to researchers and educators, we present an open-source low-cost three-finger robotic gripper platform for research and educational purposes. The 3-D design model of the gripper is presented and manufactured with a minimal number of 3-D-printed components and an off-the-shelf servo actuator. An underactuated finger and gear train mechanism, with an overall gripper assembly design, are described in detail, followed by illustrations and a discussion of the gripper grasping performance and possible gripper platform modifications. The presented open-source gripper platform computer-aided design model is released for downloading on the authors research lab website (www.alaris.kz) and can be utilized by robotics researchers and educators as a design platform to build their own robotic end effector solutions for research and educational purposes.", "title": "" }, { "docid": "0ddc7bcfd60d56a0d42cd5424d3a1a71", "text": "In LLC resonant converters, the variable duty-cycle control is usually combined with a variable frequency control to widen the gain range, improve the light-load efficiency, or suppress the inrush current during start-up. However, a proper analytical model for the variable duty-cycle controlled LLC converter is still not available due to the complexity of operation modes and the nonlinearity of steady-state equations. This paper makes the efforts to develop an analytical model for the LLC converter with variable duty-cycle control. All possible operation models and critical operation characteristics are identified and discussed. The proposed model enables a better understanding of the operation characteristics and fast parameter design of the LLC converter, which otherwise cannot be achieved by the existing simulation based methods and numerical models. The results obtained from the proposed model are in well agreement with the simulations and the experimental verifications from a 500-W prototype.", "title": "" }, { "docid": "32bdd9f720989754744eddb9feedbf32", "text": "Readability depends on many factors ranging from shallow features like word length to semantic ones like coherence. We introduce novel graph-based coherence features based on frequent subgraphs and compare their ability to assess the readability of Wall Street Journal articles. In contrast to Pitler and Nenkova (2008) some of our graph-based features are significantly correlated with human judgments. 
We outperform Pitler and Nenkova (2008) in the readability ranking task by more than 5% accuracy thus establishing a new state-of-the-art on this dataset.", "title": "" }, { "docid": "a4b56dcf245b5e823ea12695abc61a77", "text": "We study complex Chern-Simons theory on a Seifert manifold M3 by embedding it into string theory. We show that complex Chern-Simons theory on M3 is equivalent to a topologically twisted supersymmetric theory and its partition function can be naturally regularized by turning on a mass parameter. We find that the dimensional reduction of this theory to 2d gives the low energy dynamics of vortices in four-dimensional gauge theory, the fact apparently overlooked in the vortex literature. We also generalize the relations between 1) the Verlinde algebra, 2) quantum cohomology of the Grassmannian, 3) Chern-Simons theory on Σ × S1 and 4) index of a spinc Dirac operator on the moduli space of flat connections to a new set of relations between 1) the “equivariant Verlinde algebra” for a complex group, 2) the equivariant quantum K-theory of vortex moduli spaces, 3) complex Chern-Simons theory on Σ × S1 and 4) the equivariant index of a spinc Dirac operator on the moduli space of Higgs bundles. CALT-TH-2014-171 ar X iv :1 50 1. 01 31 0v 1 [ he pth ] 6 J an 2 01 5", "title": "" }, { "docid": "18e95e39417fcb4dd6e294a1ad8fcfd7", "text": "The paper motivates the need to acquire methodological knowledge for involving children as test users in usability testing. It introduces a methodological framework for delineating comparative assessments of usability testing methods for children participants. This framework consists in three dimensions: (1) assessment criteria for usability testing methods, (2) characteristics describing usability testing methods and, finally, (3) characteristics of children that may impact upon the process and the result of usability testing. Two comparative studies are discussed in the context of this framework along with implications for future research. q 2003 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "4eebd4a2d5c50a2d7de7c36c5296786d", "text": "Depth information has been used in computer vision for a wide variety of tasks. Since active range sensors are currently available at low cost, high-quality depth maps can be used as relevant input for many applications. Background subtraction and video segmentation algorithms can be improved by fusing depth and color inputs, which are complementary and allow one to solve many classic color segmentation issues. In this paper, we describe one fusion method to combine color and depth based on an advanced color-based algorithm. This technique has been evaluated by means of a complete dataset recorded with Microsoft Kinect, which enables comparison with the original method. The proposed method outperforms the others in almost every test, showing more robustness to illumination changes, shadows, reflections and camouflage.", "title": "" }, { "docid": "795d4e73b3236a2b968609c39ce8f417", "text": "In this paper, we are introducing an intelligent valet parking management system that guides the cars to autonomously park within a parking lot. 
The IPLMS for Intelligent Parking Lot Management System, consists of two modules: 1) a model car with a set of micro-controllers and sensors which can scan the environment for suitable parking spot and avoid collision to obstacles, and a Parking Lot Management System (IPLMS) which screens the parking spaces within the parking lot and offers guidelines to the car. The model car has the capability to autonomously maneuver within the parking lot using a fuzzy logic algorithm, and execute parking in the spot determined by the IPLMS, using a parking algorithm. The car receives the instructions from the IPLMS through a wireless communication link. The IPLMS has the flexibility to be adopted by any parking management system, and can potentially save the clients time to look for a parking spot, and/or to stroll from an inaccessible parking space. Moreover, the IPLMS can decrease the financial burden from the parking lot management by offering an easy-to-install system for self-guided valet parking.", "title": "" }, { "docid": "b1c036f2a003ada4eaa965543e7e6d36", "text": "Seaweed and their constituents have been traditionally employed for the management of various human pathologic conditions such as edema, urinary disorders and inflammatory anomalies. The current study was performed to investigate the antioxidant and anti-arthritic effects of fucoidan from Undaria pinnatifida. A noteworthy in vitro antioxidant potential at 500μg/ml in 2, 2-diphenyl-1-picrylhydrazyl scavenging assay (80% inhibition), nitrogen oxide inhibition assay (71.83%), hydroxyl scavenging assay (71.92%), iron chelating assay (73.55%) and a substantial ascorbic acid equivalent reducing power (399.35μg/mg ascorbic acid equivalent) and total antioxidant capacity (402.29μg/mg AAE) suggested fucoidan a good antioxidant agent. Down regulation of COX-2 expression in rabbit articular chondrocytes in a dose (0-100μg) and time (0-48h) dependent manner, unveiled its in vitro anti-inflammatory significance. In vivo carrageenan induced inflammatory rat model demonstrated a 68.19% inhibition of inflammation whereas an inflammation inhibition potential of 79.38% was recorded in anti-arthritic complete Freund's adjuvant-induced arthritic rat model. A substantial ameliorating effect on altered hematological and biochemical parameters in arthritic rats was also observed. Therefore, findings of the present study prospects fucoidan as a potential antioxidant that can effectively abrogate oxidative stress, edema and arthritis-mediated inflammation and mechanistic studies are recommended for observed activities.", "title": "" }, { "docid": "ca007347ba943d279157b21794ac3871", "text": "Multiple-choice items are one of the most commonly used tools for evaluating students' knowledge and skills. A key aspect of this type of assessment is the presence of functioning distractors, i.e., incorrect alternatives intended to be plausible for students with lower achievement. To our knowledge, no work has investigated the relationship between distractor performance and the complexity of the cognitive task required to give the correct answer. The aim of this study was to investigate this relation, employing the first three levels of Bloom's taxonomy (Knowledge, Comprehension, and Application). Specifically, it was hypothesized that items classified into a higher level of Bloom's classification would show a greater number of functioning distractors. 
The study involved 174 items administered to a sample of 848 undergraduate psychology students during their statistics exam. Each student received 30 items randomly selected from the 174-item pool. The bivariate results mainly supported the authors' hypothesis: the highest percentage of functioning distractors was observed among the items classified into the Application category (η2 = 0.024 and Phi = 0.25 for the dichotomized measure). When the analysis controlled for other item features, it lost statistical significance, partly because of the confounding effect of item difficulty.", "title": "" }, { "docid": "a6fbd3f79105fd5c9edfc4a0292a3729", "text": "The widespread use of templates on the Web is considered harmful for two main reasons. Not only do they compromise the relevance judgment of many web IR and web mining methods such as clustering and classification, but they also negatively impact the performance and resource usage of tools that process web pages. In this paper we present a new method that efficiently and accurately removes templates found in collections of web pages. Our method works in two steps. First, the costly process of template detection is performed over a small set of sample pages. Then, the derived template is removed from the remaining pages in the collection. This leads to substantial performance gains when compared to previous approaches that combine template detection and removal. We show, through an experimental evaluation, that our approach is effective for identifying terms occurring in templates - obtaining F-measure values around 0.9, and that it also boosts the accuracy of web page clustering and classification methods.", "title": "" }, { "docid": "d9de6a277eec1156e680ee6f656cea10", "text": "Research in the areas of organizational climate and work performance was used to develop a framework for measuring perceptions of safety at work. The framework distinguished perceptions of the work environment from perceptions of performance related to safety. Two studies supported application of the framework to employee perceptions of safety in the workplace. Safety compliance and safety participation were distinguished as separate components of safety-related performance. Perceptions of knowledge about safety and motivation to perform safely influenced individual reports of safety performance and also mediated the link between safety climate and safety performance. Specific dimensions of safety climate were identified and constituted a higher order safety climate factor. The results support conceptualizing safety climate as an antecedent to safety performance in organizations.", "title": "" }, { "docid": "068295e6848b3228d1f25be84c9bf566", "text": "We describe an automated system for the large-scale monitoring of Web sites that serve as online storefronts for spam-advertised goods. Our system is developed from an extensive crawl of black-market Web sites that deal in illegal pharmaceuticals, replica luxury goods, and counterfeit software. The operational goal of the system is to identify the affiliate programs of online merchants behind these Web sites; the system itself is part of a larger effort to improve the tracking and targeting of these affiliate programs. There are two main challenges in this domain. The first is that appearances can be deceiving: Web pages that render very differently are often linked to the same affiliate program of merchants. 
The second is the difficulty of acquiring training data: the manual labeling of Web pages, though necessary to some degree, is a laborious and time-consuming process. Our approach in this paper is to extract features that reveal when Web pages linked to the same affiliate program share a similar underlying structure. Using these features, which are mined from a small initial seed of labeled data, we are able to profile the Web sites of forty-four distinct affiliate programs that account, collectively, for hundreds of millions of dollars in illicit e-commerce. Our work also highlights several broad challenges that arise in the large-scale, empirical study of malicious activity on the Web.", "title": "" }, { "docid": "72e4984c05e6b68b606775bbf4ce3b33", "text": "This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Finegrained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F , sentences 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.", "title": "" }, { "docid": "dcaa36372cdc34b12ae26875b90c5d56", "text": "This paper presents two different implementations of four Quadrant CMOS Analog Multiplier Circuits. The Multipliers are designed in current mode. Current squarer and translinear loops are the basic blocks for both the structures in realization of mathematical equations. The structures have simplicity in implementation. The proposed multiplier structures are designed in implementing in 180 nm CMOS technology with a supply of 1.8 V & 1.2 V resp. The structures have frequency bandwidth of 493 MHz & 75 MHz with a power consumption of 146.78μW & 36.08μW respectively.", "title": "" }, { "docid": "ac15d2b4d14873235fe6e4d2dfa84061", "text": "Despite strong popular conceptions of gender differences in emotionality and striking gender differences in the prevalence of disorders thought to involve emotion dysregulation, the literature on the neural bases of emotion regulation is nearly silent regarding gender differences (Gross, 2007; Ochsner & Gross, in press). The purpose of the present study was to address this gap in the literature. Using functional magnetic resonance imaging, we asked male and female participants to use a cognitive emotion regulation strategy (reappraisal) to down-regulate their emotional responses to negatively valenced pictures. Behaviorally, men and women evidenced comparable decreases in negative emotion experience. Neurally, however, gender differences emerged. Compared with women, men showed (a) lesser increases in prefrontal regions that are associated with reappraisal, (b) greater decreases in the amygdala, which is associated with emotional responding, and (c) lesser engagement of ventral striatal regions, which are associated with reward processing. We consider two non-competing explanations for these differences. First, men may expend less effort when using cognitive regulation, perhaps due to greater use of automatic emotion regulation. Second, women may use positive emotions in the service of reappraising negative emotions to a greater degree. 
We then consider the implications of gender differences in emotion regulation for understanding gender differences in emotional processing in general, and gender differences in affective disorders.", "title": "" }, { "docid": "ba66e377db4ef2b3c626a0a2f19da8c3", "text": "A challenging aspect of scene text recognition is to handle text with distortions or irregular layout. In particular, perspective text and curved text are common in natural scenes and are difficult to recognize. In this work, we introduce ASTER, an end-to-end neural network model that comprises a rectification network and a recognition network. The rectification network adaptively transforms an input image into a new one, rectifying the text in it. It is powered by a flexible Thin-Plate Spline transformation which handles a variety of text irregularities and is trained without human annotations. The recognition network is an attentional sequence-to-sequence model that predicts a character sequence directly from the rectified image. The whole model is trained end to end, requiring only images and their groundtruth text. Through extensive experiments, we verify the effectiveness of the rectification and demonstrate the state-of-the-art recognition performance of ASTER. Furthermore, we demonstrate that ASTER is a powerful component in end-to-end recognition systems, for its ability to enhance the detector.", "title": "" } ]
scidocsrr
7e090c38dc57e611e59262636cd070fc
Electrostatic chuck with a thin ceramic insulation layer for wafer holding
[ { "docid": "00068e4dc90e9bb9f3b8ca7f8e09f679", "text": "In the semiconductor industry, many manufacturing processes, such as CVD or dry etching, are performed in vacuum condition. The electrostatic wafer chuck is the most preferable handling method under such circumstances. It enables retention of a wafer flat and enhanced heat transfer through the whole surface area because the wafer can firmly contact with the chuck. We have investigated the fundamental study of an electrostatic chuck with comb type electrodes and a thin dielectric film. In order to remove the air gap between them, silicone oil is used as a filler to prevent breakdown. The experimental results proved the potential to use the electrostatic chuck for silicon wafer handling. There, however, is a problem which comes from using silicone oil as an insulating filler. The thin dielectric film is easily deformed by tension when the object starts moving. In this report experimental results of the electrostatic wafer chuck are shown when insulating sealant, instead of silicone oil, is used. The electrostatic force acting on the 4 inch silicon wafer is examined with several types of sealant and dielectric films. The electrostatic force increased with the square of the applied voltage for lower voltage and gradually saturated at higher voltage, and the maximum force obtained was approximately 30 N.", "title": "" } ]
[ { "docid": "718a38a546de2dba3233607d7652c94a", "text": "In modern power converter circuits, freewheeling diode snappy recovery phenomenon (voltage snap-off) can ultimately destroy the insulated gate bipolar transistor (IGBT) during turn-on and cause a subsequent circuit failure. In this paper, snappy recovery of modern fast power diodes is investigated with the aid of semiconductor device simulation tools, and experimental test results. The work presented here confirms that the reverse recovery process can by expressed by means of diode capacitive effects which influence the reverse recovery characteristics and determine if the diode exhibits soft or snappy recovery behavior. From the experimental and simulation results, a clear view is obtained for the physical process, causes and device/circuit conditions at which snap-off occurs. The analysis is based on the effect of both device and external operating parameters on the excess minority carrier distributions before and during the reverse recovery transient period.", "title": "" }, { "docid": "df0be45b6db0de70acb6bbf44e7898aa", "text": "The paper focuses on conservation agriculture (CA), defined as minimal soil disturbance (no-till, NT) and permanent soil cover (mulch) combined with rotations, as a more sustainable cultivation system for the future. Cultivation and tillage play an important role in agriculture. The benefits of tillage in agriculture are explored before introducing conservation tillage (CT), a practice that was borne out of the American dust bowl of the 1930s. The paper then describes the benefits of CA, a suggested improvement on CT, where NT, mulch and rotations significantly improve soil properties and other biotic factors. The paper concludes that CA is a more sustainable and environmentally friendly management system for cultivating crops. Case studies from the rice-wheat areas of the Indo-Gangetic Plains of South Asia and the irrigated maize-wheat systems of Northwest Mexico are used to describe how CA practices have been used in these two environments to raise production sustainably and profitably. Benefits in terms of greenhouse gas emissions and their effect on global warming are also discussed. The paper concludes that agriculture in the next decade will have to sustainably produce more food from less land through more efficient use of natural resources and with minimal impact on the environment in order to meet growing population demands. Promoting and adopting CA management systems can help meet this goal.", "title": "" }, { "docid": "170f14fbf337186c8bd9f36390916d2e", "text": "In this paper, we draw upon two sets of theoretical resources to develop a comprehensive theory of sexual offender rehabilitation named the Good Lives Model-Comprehensive (GLM-C). The original Good Lives Model (GLM-O) forms the overarching values and principles guiding clinical practice in the GLM-C. In addition, the latest sexual offender theory (i.e., the Integrated Theory of Sexual Offending; ITSO) provides a clear etiological grounding for these principles. The result is a more substantial and improved rehabilitation model that is able to conceptually link latest etiological theory with clinical practice. Analysis of the GLM-C reveals that it also has the theoretical resources to secure currently used self-regulatory treatment practice within a meaningful structure. D 2005 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "dcf4de4629be22628f5b226a1dcee856", "text": "Paper prototyping offers unique affordances for interface design. However, due to its spontaneous nature and the limitations of paper, it is difficult to distill and communicate a paper prototype design and its user test findings to a wide audience. To address these issues, we created FrameWire, a computer vision-based system that automatically extracts interaction flows from the video recording of paper prototype user tests. Based on the extracted logic, FrameWire offers two distinct benefits for designers: a structural view of the video recording that allows a designer or a stakeholder to easily distill and understand the design concept and user interaction behaviors, and automatic generation of interactive HTML-based prototypes that can be easily tested with a larger group of users as well as \"walked through\" by other stakeholders. The extraction is achieved by automatically aggregating video frame sequences into an interaction flow graph based on frame similarities and a designer-guided clustering process. The results of evaluating FrameWire with realistic paper prototyping tests show that our extraction approach is feasible and FrameWire is a promising tool for enhancing existing prototyping practice.", "title": "" }, { "docid": "a47d9d5ddcd605755eb60d5499ad7f7a", "text": "This paper presents a 14MHz Class-E power amplifier to be used for wireless power transmission. The Class-E power amplifier was built to consider the VSWR and the frequency bandwidth. Tw o kinds of circuits were designed: the high and low quality factor amplifiers. The low quality factor amplifier is confirmed to have larger bandwidth than the high quality factor amplifier. It has also possessed less sensitive characteristics. Therefore, the low quality factor amplifier circuit was adopted and tested. The effect of gate driving input source is studied. The efficiency of the Class-E amplifier reaches 85.5% at 63W.", "title": "" }, { "docid": "a81004b3fc39a66d93811841c6d42ff0", "text": "Failing to properly isolate components in the same address space has resulted in a substantial amount of vulnerabilities. Enforcing the least privilege principle for memory accesses can selectively isolate software components to restrict attack surface and prevent unintended cross-component memory corruption. However, the boundaries and interactions between software components are hard to reason about and existing approaches have failed to stop attackers from exploiting vulnerabilities caused by poor isolation. We present the secure memory views (SMV) model: a practical and efficient model for secure and selective memory isolation in monolithic multithreaded applications. SMV is a third generation privilege separation technique that offers explicit access control of memory and allows concurrent threads within the same process to partially share or fully isolate their memory space in a controlled and parallel manner following application requirements. An evaluation of our prototype in the Linux kernel (TCB < 1,800 LOC) shows negligible runtime performance overhead in real-world applications including Cherokee web server (< 0.69%), Apache httpd web server (< 0.93%), and Mozilla Firefox web browser (< 1.89%) with at most 12 LOC changes.", "title": "" }, { "docid": "5d21e654b54571d2eaf4714b43019ed5", "text": "Data visualization is the process of representing data as pictures to support reasoning about the underlying data. 
For the interpretation to be as easy as possible, we need to be as close as possible to the original data. As most visualization tools have an internal meta-model, which is different from the one for the presented data, they usually need to duplicate the original data to conform to their meta-model. This leads to an increase in the resources needed, increase which is not always justified. In this work we argue for the need of having an engine that is as close as possible to the data and we present our solution of moving the visualization tool to the data, instead of moving the data to the visualization tool. Our solution also emphasizes the necessity of reusing basic blocks to express complex visualizations and allowing the programmer to script the visualization using his preferred tools, rather than a third party format. As a validation of the expressiveness of our framework, we show how we express several already published visualizations and describe the pros and cons of the approach.", "title": "" }, { "docid": "2582b0fffad677d3f0ecf11b92d9702d", "text": "This study explores teenage girls' narrations of the relationship between self-presentation and peer comparison on social media in the context of beauty. Social media provide new platforms that manifest media and peer influences on teenage girls' understanding of beauty towards an idealized notion. Through 24 in-depth interviews, this study examines secondary school girls' self-presentation and peer comparison behaviors on social network sites where the girls posted self-portrait photographs or “selfies” and collected peer feedback in the forms of “likes,” “followers,” and comments. Results of thematic analysis reveal a gap between teenage girls' self-beliefs and perceived peer standards of beauty. Feelings of low self-esteem and insecurity underpinned their efforts in edited self-presentation and quest for peer recognition. Peers played multiple roles that included imaginary audiences, judges, vicarious learning sources, and comparison targets in shaping teenage girls' perceptions and presentation of beauty. Findings from this study reveal the struggles that teenage girls face today and provide insights for future investigations and interventions pertinent to teenage girls’ presentation and evaluation of self on", "title": "" }, { "docid": "dcf7214c15c13f13d33c9a7b2c216588", "text": "Many machine learning tasks such as multiple instance learning, 3D shape recognition and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the permutation of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from sparse Gaussian process literature. It reduces computation time of self-attention from quadratic to linear in the number of elements in the set. 
We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating increased performance compared to recent methods for set-structured data.", "title": "" }, { "docid": "bdc614429426f5ad5aeaa73695d58285", "text": "Multibaseline (MB) synthetic aperture radar (SAR) tomography is a promising mode of SAR interferometry, allowing full 3-D imaging of volumetric and layover scatterers in place of a single elevation estimation capability for each SAR cell However, Fourier-based MB SAR tomography is generally affected by unsatisfactory imaging quality due to a typically low number of baselines with irregular distribution. In this paper, we improve the basic elevation focusing technique by reconstructing a set of uniform baselines data exploiting in the interpolation step the ancillary information about the extension of a height sector which contains all the scatterers. This a priori information can be derived from the knowledge of the kind of the observed scenario (e.g., forest or urban). To demonstrate the concept, an imaging enhancement analysis is carried out by simulation.", "title": "" }, { "docid": "6974bf94292b51fc4efd699c28c90003", "text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.", "title": "" }, { "docid": "de1ed7fbb69e5e33e17d1276d265a3e1", "text": "Abnormal glucose metabolism and enhanced oxidative stress accelerate cardiovascular disease, a chronic inflammatory condition causing high morbidity and mortality. Here, we report that in monocytes and macrophages of patients with atherosclerotic coronary artery disease (CAD), overutilization of glucose promotes excessive and prolonged production of the cytokines IL-6 and IL-1β, driving systemic and tissue inflammation. In patient-derived monocytes and macrophages, increased glucose uptake and glycolytic flux fuel the generation of mitochondrial reactive oxygen species, which in turn promote dimerization of the glycolytic enzyme pyruvate kinase M2 (PKM2) and enable its nuclear translocation. Nuclear PKM2 functions as a protein kinase that phosphorylates the transcription factor STAT3, thus boosting IL-6 and IL-1β production. Reducing glycolysis, scavenging superoxide and enforcing PKM2 tetramerization correct the proinflammatory phenotype of CAD macrophages. 
In essence, PKM2 serves a previously unidentified role as a molecular integrator of metabolic dysfunction, oxidative stress and tissue inflammation and represents a novel therapeutic target in cardiovascular disease.", "title": "" }, { "docid": "c443ca07add67d6fc0c4901e407c68f2", "text": "This paper proposes a compiler-based programming framework that automatically translates user-written structured grid code into scalable parallel implementation code for GPU-equipped clusters. To enable such automatic translations, we design a small set of declarative constructs that allow the user to express stencil computations in a portable and implicitly parallel manner. Our framework translates the user-written code into actual implementation code in CUDA for GPU acceleration and MPI for node-level parallelization with automatic optimizations such as computation and communication overlapping. We demonstrate the feasibility of such automatic translations by implementing several structured grid applications in our framework. Experimental results on the TSUBAME2.0 GPU-based supercomputer show that the performance is comparable as hand-written code and good strong and weak scalability up to 256 GPUs.", "title": "" }, { "docid": "eba9ec47b04e08ff2606efa9ffebb6f8", "text": "OBJECTIVE\nThe incidence of neuroleptic malignant syndrome (NMS) is not known, but the frequency of its occurrence with conventional antipsychotic agents has been reported to vary from 0.02% to 2.44%.\n\n\nDATA SOURCES\nMEDLINE search conducted in January 2003 and review of references within the retrieved articles.\n\n\nDATA SYNTHESIS\nOur MEDLINE research yielded 68 cases (21 females and 47 males) of NMS associated with atypical antipsychotic drugs (clozapine, N = 21; risperidone, N = 23; olanzapine, N = 19; and quetiapine, N = 5). The fact that 21 cases of NMS with clozapine were found indicates that low occurrence of extrapyramidal symptoms (EPS) and low EPS-inducing potential do not prevent the occurrence of NMS and D(2) dopamine receptor blocking potential does not have direct correlation with the occurrence of NMS. One of the cardinal features of NMS is an increasing manifestation of EPS, and the conventional antipsychotic drugs are known to produce EPS in 95% or more of NMS cases. With atypical antipsychotic drugs, the incidence of EPS during NMS is of a similar magnitude.\n\n\nCONCLUSIONS\nFor NMS associated with atypical antipsychotic drugs, the mortality rate was lower than that with conventional antipsychotic drugs. However, the mortality rate may simply be a reflection of physicians' awareness and ensuing early treatment.", "title": "" }, { "docid": "91f8e39777636124d449d1f2829f47de", "text": "We propose CAEMSI, a cross-domain analytic evaluation methodology for Style Imitation (SI) systems, based on a set of statistical significance tests that allow hypotheses comparing two corpora to be tested. Typically, SI systems are evaluated using human participants, however, this type of approach has several weaknesses. For humans to provide reliable assessments of an SI system, they must possess a sufficient degree of domain knowledge, which can place significant limitations on the pool of participants. Furthermore, both human bias against computer-generated artifacts, and the variability of participants’ assessments call the reliability of the results into question. Most importantly, the use of human participants places limitations on the number of generated artifacts and SI systems which can be feasibly evaluated. 
Directly motivated by these shortcomings, CAEMSI provides a robust and scalable approach to the evaluation problem. Normalized Compression Distance, a domain-independent distance metric, is used to measure the distance between individual artifacts within a corpus. The difference between corpora is measured using test statistics derived from these inter-artifact distances, and permutation testing is used to determine the significance of the difference. We provide empirical evidence validating the statistical significance tests, using datasets from two distinct domains.", "title": "" }, { "docid": "0e153353fb8af1511de07c839f6eaca5", "text": "The calculation of a transformer's parasitics, such as its self capacitance, is fundamental for predicting the frequency behavior of the device, reducing this capacitance value and moreover for more advanced aims of capacitance integration and cancellation. This paper presents a comprehensive procedure for calculating all contributions to the self-capacitance of high-voltage transformers and provides a detailed analysis of the problem, based on a physical approach. The advantages of the analytical formulation of the problem rather than a finite element method analysis are discussed. The approach and formulas presented in this paper can also be used for other wound components rather than just step-up transformers. Finally, analytical and experimental results are presented for three different high-voltage transformer architectures.", "title": "" }, { "docid": "3bb97e1573bdec1dd84f38a41d041abd", "text": "This paper presents a case study of the design of the IT Module-Based Test Automation Framework (ITAF). ITAF is designed to allow IT project teams to share a standardized test automation framework or one of its modules within the.NET technology. This framework allows the project teams to easily automate the test cases, improve the efficiency and productivity, and reuse code and resources. The framework is extensible so that users can contribute and add value to it. Each module of the framework represents one typical technology or an application layer. Each module can be decoupled from the framework and used independently to fulfill an automation goal for a specific type of application.", "title": "" }, { "docid": "92da117d31574246744173b339b0d055", "text": "We present a method for gesture detection and localization based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at two temporal scales. Key to our technique is a training strategy which exploits i) careful initialization of individual modalities; and ii) gradual fusion of modalities from strongest to weakest cross-modality structure. 
We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams.", "title": "" }, { "docid": "4d178a58cfbf0b9441f5b707ae3e7a3f", "text": "Allergic contact cheilitis caused by olaflur in toothpaste Anton de Groot1, Ron Tupker2, Diny Hissink3 and Marjolijn Woutersen4 1Acdegroot Publishing, 8351 HV Wapserveen, The Netherlands, 2Department of Dermatology, Sint Antonius Hospital, 3435 CM Nieuwegein, The Netherlands, 3Consumer and Safety Division, Netherlands Food and Consumer Product Safety Authority, 3511 GG Utrecht, The Netherlands, and 4Centre for Safety of Substances and Products, National Institute for Public Health and the Environment, 3721 MA Bilthoven, The Netherlands", "title": "" }, { "docid": "72dc3957db058654d60b590202aba68a", "text": "Inverted pendulum system is a typical rapid, multivariable, nonlinear, absolute instability and non-minimum phase system, and it is a favorite problem in the field of control theory and application. In its control, the current main control method includes in fuzzy control, variable structure control and robust control etc. For fuzzy control of a double inverted pendulum, the research is focused on how to solve the “rule explosion” problem. The model and characteristics of the system are detailed analyzed; a status fusion function is designed using information fusion. By using it, the output variables of the system with six dimensions is synthesized as two variables: error and variation of error. From the fuzzy control theory, we also design the fuzzy controller of the double inverted pendulum system in MATLAB, and carried out the system simulation in Simulink, results show that the method is feasible.", "title": "" } ]
scidocsrr
eaf9e3e6344f8b1a6f2d4c0bc4babf64
Boosted Generative Models
[ { "docid": "eea39002b723aaa9617c63c1249ef9a6", "text": "Generative Adversarial Networks (GAN) [1] are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes.", "title": "" } ]
[ { "docid": "2e41cfe6d9a3fea4db6cf8c9bf973cb9", "text": "A challenge in creating a dataset for machine reading comprehension (MRC) is to collect questions that require a sophisticated understanding of language to answer beyond using superficial cues. In this work, we investigate what makes questions easier across recent 12 MRC datasets with three question styles (answer extraction, description, and multiple choice). We propose to employ simple heuristics to split each dataset into easy and hard subsets and examine the performance of two baseline models for each of the subsets. We then manually annotate questions sampled from each subset with both validity and requisite reasoning skills to investigate which skills explain the difference between easy and hard questions. From this study, we observed that (i) the baseline performances for the hard subsets remarkably degrade compared to those of entire datasets, (ii) hard questions require knowledge inference and multiple-sentence reasoning in comparison with easy questions, and (iii) multiplechoice questions tend to require a broader range of reasoning skills than answer extraction and description questions. These results suggest that one might overestimate recent advances in MRC.", "title": "" }, { "docid": "d319a17ad2fa46e0278e0b0f51832f4b", "text": "Automatic Essay Assessor (AEA) is a system that utilizes information retrieval techniques such as Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Latent Dirichlet Allocation (LDA) for automatic essay grading. The system uses learning materials and relatively few teacher-graded essays for calibrating the scoring mechanism before grading. We performed a series of experiments using LSA, PLSA and LDA for document comparisons in AEA. In addition to comparing the methods on a theoretical level, we compared the applicability of LSA, PLSA, and LDA to essay grading with empirical data. The results show that the use of learning materials as training data for the grading model outperforms the k-NN-based grading methods. In addition to this, we found that using LSA yielded slightly more accurate grading than PLSA and LDA. We also found that the division of the learning materials in the training data is crucial. It is better to divide learning materials into sentences than paragraphs.", "title": "" }, { "docid": "f0ea74e3a3ab58435d750bd2e476d002", "text": "This note describes the design of a low-cost interface using Arduino microcontroller boards and Visual Basic programming for operant conditioning research. The board executes one program in Arduino programming language that polls the state of the inputs and generates outputs in an operant chamber. This program communicates through a USB port with another program written in Visual Basic 2010 Express Edition running on a laptop, desktop, netbook computer, or even a tablet equipped with Windows operating system. The Visual Basic program controls schedules of reinforcement and records real-time data. A single Arduino board can be used to control a total of 52 inputs/output lines, and multiple Arduino boards can be used to control multiple operant chambers. An external power supply and a series of micro relays are required to control 28-V DC devices commonly used in operant chambers. Instructions for downloading and using the programs to generate simple and concurrent schedules of reinforcement are provided. 
Testing suggests that the interface is reliable, accurate, and could serve as an inexpensive alternative to commercial equipment.", "title": "" }, { "docid": "9cc651b00ea0ca4d9fd551b5e4f1a238", "text": "Sustained physical exercise leads to a reduced capacity to produce voluntary force that typically outlasts the exercise bout. This \"fatigue\" can be due both to impaired muscle function, termed \"peripheral fatigue,\" and a reduction in the capacity of the central nervous system to activate muscles, termed \"central fatigue.\" In this review we consider the factors that determine the recovery of voluntary force generating capacity after various types of exercise. After brief, high-intensity exercise there is typically a rapid restitution of force that is due to recovery of central fatigue (typically within 2 min) and aspects of peripheral fatigue associated with excitation-contraction coupling and reperfusion of muscles (typically within 3-5 min). Complete recovery of muscle function may be incomplete for some hours, however, due to prolonged impairment in intracellular Ca2+ release or sensitivity. After low-intensity exercise of long duration, voluntary force typically shows rapid, partial, recovery within the first few minutes, due largely to recovery of the central, neural component. However, the ability to voluntarily activate muscles may not recover completely within 30 min after exercise. Recovery of peripheral fatigue contributes comparatively little to the fast initial force restitution and is typically incomplete for at least 20-30 min. Work remains to identify what factors underlie the prolonged central fatigue that usually accompanies long-duration single joint and locomotor exercise and to document how the time course of neuromuscular recovery is affected by exercise intensity and duration in locomotor exercise. Such information could be useful to enhance rehabilitation and sports performance.", "title": "" }, { "docid": "114affaf4e25819aafa1c11da26b931f", "text": "We propose a coherent mathematical model for human fingerprint images. Fingerprint structure is represented simply as a hologram - namely a phase modulated fringe pattern. The holographic form unifies analysis, classification, matching, compression, and synthesis of fingerprints in a self-consistent formalism. Hologram phase is at the heart of the method; a phase that uniquely decomposes into two parts via the Helmholtz decomposition theorem. Phase also circumvents the infinite frequency singularities that always occur at minutiae. Reliable analysis is possible using a recently discovered two-dimensional demodulator. The parsimony of this model is demonstrated by the reconstruction of a fingerprint image with an extreme compression factor of 239.", "title": "" }, { "docid": "0473140adacc125cbb8cf4317b0b4c7f", "text": "To improve the usefulness of actuators dedicated to micrometric applications, a compact and accurate positioning mechanism for inner ear drug delivery is proposed in this paper. This serial novel robotic system is able to position a magnetic actuator based on permanent magnets, used as an end-effector of this robotic manipulator for steering magnetic microrobot throughout the cochlea. The serial kinematics is based on two rotations and one translation which axes intersect at the center of the cochlea that is the remote center of rotation (RCM). The conceptual design of this system which takes account of the medical and geometrical constraints is presented in this paper. 
Direct and inverse kinematics are developed using the classical mathematical tools of serial robotics. This, besides a finite-element analysis of the structure and motors sizing. The stiff and highly compact manipulator allows saving space while guaranteeing the safety of the patient.", "title": "" }, { "docid": "bcda82b5926620060f65506ccbac042f", "text": "This paper investigates spirolaterals for their beauty of form and the unexpected complexity arising from them. From a very simple generative procedure, spirolaterals can be created having great complexity and variation. Using mathematical and computer-based methods, issues of closure, variation, enumeration, and predictictability are discussed. A historical review is also included. The overriding interest in this research is to develop methods and procedures to investigate geometry for the purpose of inspiration for new architectural and sculptural forms. This particular phase will concern the two dimensional representations of spirolaterals.", "title": "" }, { "docid": "42fcc24e20ad15de00eb1f93add8b827", "text": "Although scientometrics is seeing increasing use in Information Systems (IS) research, in particular for evaluating research efforts and measuring scholarly influence; historically, scientometric IS studies are focused primarily on ranking authors, journals, or institutions. Notwithstanding the usefulness of ranking studies for evaluating the productivity of the IS field’s formal communication channels and its scholars, the IS field has yet to exploit the full potential that scientometrics offers, especially towards its progress as a discipline. This study makes a contribution by raising the discourse surrounding the value of scientometric research in IS, and proposes a framework that uncovers the multi-dimensional bases for citation behaviour and its epistemological implications on the creation, transfer, and growth of IS knowledge. Having identified 112 empirical research evaluation studies in IS, we select 44 substantive scientometric IS studies for in-depth content analysis. The findings from this review allow us to map an engaging future in scientometric research, especially towards enhancing the IS field’s conceptual and theoretical development. Journal of Information Technology advance online publication, 12 January 2016; doi:10.1057/jit.2015.29", "title": "" }, { "docid": "af02dd142aa378632a9222ed19c57968", "text": "Commodity CPU architectures, such as ARM and Intel CPUs, have started to offer trusted computing features in their CPUs aimed at displacing dedicated trusted hardware. Unfortunately, these CPU architectures raise serious challenges to building trusted systems because they omit providing secure resources outside the CPU perimeter. This paper shows how to overcome these challenges to build software systems with security guarantees similar to those of dedicated trusted hardware. We present the design and implementation of a firmware-based TPM 2.0 (fTPM) leveraging ARM TrustZone. Our fTPM is the reference implementation of a TPM 2.0 used in millions of mobile devices. We also describe a set of mechanisms needed for the fTPM that can be useful for building more sophisticated trusted applications beyond just a TPM.", "title": "" }, { "docid": "45113e4c563efeacb3ebd62bd7b0643b", "text": "We present AutoConnect, an automatic method that creates customized, 3D-printable connectors attaching two physical objects together. 
Users simply position and orient virtual models of the two objects that they want to connect and indicate some auxiliary information such as weight and dimensions. Then, AutoConnect creates several alternative designs that users can choose from for 3D printing. The design of the connector is created by combining two holders, one for each object. We categorize the holders into two types. The first type holds standard objects such as pipes and planes. We utilize a database of parameterized mechanical holders and optimize the holder shape based on the grip strength and material consumption. The second type holds free-form objects. These are procedurally generated shell-gripper designs created based on geometric analysis of the object. We illustrate the use of our method by demonstrating many examples of connectors and practical use cases.", "title": "" }, { "docid": "e2b8dd31dad42e82509a8df6cf21df11", "text": "Recent experiments indicate the need for revision of a model of spatial memory consisting of viewpoint-specific representations, egocentric spatial updating and a geometric module for reorientation. Instead, it appears that both egocentric and allocentric representations exist in parallel, and combine to support behavior according to the task. Current research indicates complementary roles for these representations, with increasing dependence on allocentric representations with the amount of movement between presentation and retrieval, the number of objects remembered, and the size, familiarity and intrinsic structure of the environment. Identifying the neuronal mechanisms and functional roles of each type of representation, and of their interactions, promises to provide a framework for investigation of the organization of human memory more generally.", "title": "" }, { "docid": "b1da294b1d8f270cb2bfe0074231209e", "text": "The use of depth cameras in precision agriculture is increasing day by day. This type of sensor has been used for the plant structure characterization of several crops. However, the discrimination of small plants, such as weeds, is still a challenge within agricultural fields. Improvements in the new Microsoft Kinect v2 sensor can capture the details of plants. The use of a dual methodology using height selection and RGB (Red, Green, Blue) segmentation can separate crops, weeds, and soil. This paper explores the possibilities of this sensor by using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. The processed models showed good consistency among the 3D depth images and soil measurements obtained from the actual structural parameters. Maize plants were identified in the samples by height selection of the connected faces and showed a correlation of 0.77 with maize biomass. The lower height of the weeds made RGB recognition necessary to separate them from the soil microrelief of the samples, achieving a good correlation of 0.83 with weed biomass. In addition, weed density showed good correlation with volumetric measurements. The canonical discriminant analysis showed promising results for classification into monocots and dictos. These results suggest that estimating volume using the Kinect methodology can be a highly accurate method for crop status determination and weed detection. 
It offers several possibilities for the automation of agricultural processes by the construction of a new system integrating these sensors and the development of algorithms to properly process the information provided by them.", "title": "" }, { "docid": "8e878e5083d922d97f8d573c54cbb707", "text": "Deep neural networks have become the state-of-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LM-architecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LM-ResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networks while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.", "title": "" }, { "docid": "d5d39c867cd1b128c19a7e3aabe0fbfd", "text": "Providing leisure to people with dementia is a serious challenge, for health care professionals and designers and engineers of products used for activity sessions. This article describes the design process of ''the Chitchatters,'' a leisure game for a group of people with dementia in day care centers. The game aims to stimulate social interaction among people with dementia. 
Different stakeholders, such as older adults with dementia, their relatives and care professionals were involved in the design process via qualitative research methods as participant observation and the use of probes. These methods were applied to give the design team insight into the experiential world of people with dementia. This article presents how design insights from practice and literature can be translated into a real design for a leisure product for group use by older people with dementia, and shows designers how to work with, and design for, special groups.", "title": "" }, { "docid": "8bd7658e27334e52c74b188570edce46", "text": "☆ JH was funded by NERC and a University Royal Socie by an AIB grant awarded to DAP. ⁎ Corresponding author. Department of Anthropology, University Park, PA 16802. Tel.: +1 814 867 0453. E-mail address: dap27@psu.edu (D.A. Puts). 1090-5138/$ – see front matter © 2013 The Authors. P http://dx.doi.org/10.1016/j.evolhumbehav.2013.05.004 Please cite this article as: Hill, A.K., et al., Qu (2013), http://dx.doi.org/10.1016/j.evolhum Article history: Initial receipt 13 March 2013 Final revision received 30 May 2013 Available online xxxx", "title": "" }, { "docid": "6f77e74cd8667b270fae0ccc673b49a5", "text": "GeneMANIA (http://www.genemania.org) is a flexible, user-friendly web interface for generating hypotheses about gene function, analyzing gene lists and prioritizing genes for functional assays. Given a query list, GeneMANIA extends the list with functionally similar genes that it identifies using available genomics and proteomics data. GeneMANIA also reports weights that indicate the predictive value of each selected data set for the query. Six organisms are currently supported (Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Mus musculus, Homo sapiens and Saccharomyces cerevisiae) and hundreds of data sets have been collected from GEO, BioGRID, Pathway Commons and I2D, as well as organism-specific functional genomics data sets. Users can select arbitrary subsets of the data sets associated with an organism to perform their analyses and can upload their own data sets to analyze. The GeneMANIA algorithm performs as well or better than other gene function prediction methods on yeast and mouse benchmarks. The high accuracy of the GeneMANIA prediction algorithm, an intuitive user interface and large database make GeneMANIA a useful tool for any biologist.", "title": "" }, { "docid": "2033d9c32fb5656c04b8c6f4511baef0", "text": "Graphs are widely used as a natural framework that captures interactions between individual elements represented as nodes in a graph. In medical applications, specifically, nodes can represent individuals within a potentially large population (patients or healthy controls) accompanied by a set of features, while the graph edges incorporate associations between subjects in an intuitive manner. This representation allows to incorporate the wealth of imaging and non-imaging information as well as individual subject features simultaneously in disease classification tasks. Previous graph-based approaches for supervised or unsupervised learning in the context of disease prediction solely focus on pairwise similarities between subjects, disregarding individual characteristics and features, or rather rely on subject-specific imaging feature vectors and fail to model interactions between them. 
In this paper, we present a thorough evaluation of a generic framework that leverages both imaging and non-imaging information and can be used for brain analysis in large populations. This framework exploits Graph Convolutional Networks (GCNs) and involves representing populations as a sparse graph, where its nodes are associated with imaging-based feature vectors, while phenotypic information is integrated as edge weights. The extensive evaluation explores the effect of each individual component of this framework on disease prediction performance and further compares it to different baselines. The framework performance is tested on two large datasets with diverse underlying data, ABIDE and ADNI, for the prediction of Autism Spectrum Disorder and conversion to Alzheimer's disease, respectively. Our analysis shows that our novel framework can improve over state-of-the-art results on both databases, with 70.4% classification accuracy for ABIDE and 80.0% for ADNI.", "title": "" }, { "docid": "99f62da011921c0ff51daf0c928c865a", "text": "The Health Belief Model, social learning theory (recently relabelled social cognitive theory), self-efficacy, and locus of control have all been applied with varying success to problems of explaining, predicting, and influencing behavior. Yet, there is conceptual confusion among researchers and practitioners about the interrelationships of these theories and variables. This article attempts to show how these explanatory factors may be related, and in so doing, posits a revised explanatory model which incorporates self-efficacy into the Health Belief Model. Specifically, self-efficacy is proposed as a separate independent variable along with the traditional health belief variables of perceived susceptibility, severity, benefits, and barriers. Incentive to behave (health motivation) is also a component of the model. Locus of control is not included explicitly because it is believed to be incorporated within other elements of the model. It is predicted that the new formulation will more fully account for health-related behavior than did earlier formulations, and will suggest more effective behavioral interventions than have hitherto been available to health educators.", "title": "" }, { "docid": "f4abdb10a7c7653e44c90233e06733e7", "text": "Automated spatiotemporal and semantic information extraction for hazards.\" PhD (Doctor of Philosophy) thesis.
ABSTRACT This dissertation explores three research topics related to automated spatiotemporal and semantic information extraction about hazard events from Web news reports and other social media. The dissertation makes a unique contribution of bridging geographic information science, geographic information retrieval, and natural language processing. Geographic information retrieval and natural language processing techniques are applied to extract spatiotemporal and semantic information automatically from Web documents, to retrieve information about patterns of hazard events that are not explicitly described in the texts. Chapters 2, 3 and 4 can be regarded as three standalone journal papers. The research topics covered by the three chapters are related to each other, and are presented in a sequential way. Chapter 2 begins with an investigation of methods for automatically extracting spatial and temporal information about hazards from Web news reports. A set of rules is developed to combine the spatial and temporal information contained in the reports based on how this information is presented in text in order to capture the dynamics of hazard events (e.g., changes in event locations, new events occurring) as they …", "title": "" }, { "docid": "a6fd8b8506a933a7cc0530c6ccda03a8", "text": "Native ecosystems are continuously being transformed mostly into agricultural lands. Simultaneously, a large proportion of fields are abandoned after some years of use. Without any intervention, altered landscapes usually show a slow reversion to native ecosystems, or to novel ecosystems. One of the main barriers to vegetation regeneration is poor propagule supply. Many restoration programs have already implemented the use of artificial perches in order to increase seed availability in open areas where bird dispersal is limited by the lack of trees. To evaluate the effectiveness of this practice, we performed a series of meta-analyses comparing the use of artificial perches versus control sites without perches. We found that setting-up artificial perches increases the abundance and richness of seeds that arrive in altered areas surrounding native ecosystems. Moreover, density of seedlings is also higher in open areas with artificial perches than in control sites without perches. Taken together, our results support the use of artificial perches to overcome the problem of poor seed availability in degraded fields, promoting and/or accelerating the restoration of vegetation in concordance with the surrounding landscape.", "title": "" } ]
scidocsrr
b2a7bd25c806c9f6dd66f2b6fa66764d
CyCADA: Cycle-Consistent Adversarial Domain Adaptation
[ { "docid": "35625f248c81ebb5c20151147483f3f6", "text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.", "title": "" }, { "docid": "9bb86141611c54978033e2ea40f05b15", "text": "In this work we investigate the problem of road scene semanti c segmentation using Deconvolutional Networks (DNs). Several c onstraints limit the practical performance of DNs in this context: firstly, the pa ucity of existing pixelwise labelled training data, and secondly, the memory const rai ts of embedded hardware, which rule out the practical use of state-of-theart DN architectures such as fully convolutional networks (FCN). To address the fi rst constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (M DRS3) dataset, aggregating data from six existing densely and sparsely lab elled datasets for training our models, and two existing, separate datasets for test ing their generalisation performance. We show that, while MDRS3 offers a greater volu me and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to over c me this, based on (i) the creation of a best-possible source network (S-Net ) from the aggregated data, ignoring time and memory constraints; and (ii) the tra nsfer of knowledge from S-Net to the memory-efficient target network (T-Net). W e evaluate different techniques for S-Net creation and T-Net transferral, and de monstrate that training a constrained deconvolutional network in this manner can un lock better performance than existing training approaches. Specifically, we s how that a target network can be trained to achieve improved accuracy versus an FC N despite using less than 1% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scar ce o fragmented and where practical constraints exist on the desired model size . We make available our network models and aggregated multi-domain dataset for reproducibility.", "title": "" } ]
[ { "docid": "3bc1a34b361c4356f69d084e0db54b9e", "text": "Predicting program properties such as names or expression types has a wide range of applications. It can ease the task of programming, and increase programmer productivity. A major challenge when learning from programs is how to represent programs in a way that facilitates effective learning. \n We present a general path-based representation for learning from programs. Our representation is purely syntactic and extracted automatically. The main idea is to represent a program using paths in its abstract syntax tree (AST). This allows a learning model to leverage the structured nature of code rather than treating it as a flat sequence of tokens. \n We show that this representation is general and can: (i) cover different prediction tasks, (ii) drive different learning algorithms (for both generative and discriminative models), and (iii) work across different programming languages. \n We evaluate our approach on the tasks of predicting variable names, method names, and full types. We use our representation to drive both CRF-based and word2vec-based learning, for programs of four languages: JavaScript, Java, Python and C#. Our evaluation shows that our approach obtains better results than task-specific handcrafted representations across different tasks and programming languages.", "title": "" }, { "docid": "44050ba52838a583e2efb723b10f0234", "text": "This paper presents a novel approach to the reconstruction of geometric models and surfaces from given sets of points using volume splines. It results in the representation of a solid by the inequality The volume spline is based on use of the Green’s function for interpolation of scalar function values of a chosen “carrier” solid. Our algorithm is capable of generating highly concave and branching objects automatically. The particular case where the surface is reconstructed from cross-sections is discussed too. Potential applications of this algorithm are in tomography, image processing, animation and CAD f o r bodies with complex surfaces.", "title": "" }, { "docid": "777d4e55f3f0bbb0544130931006b237", "text": "Spatial pyramid matching is a standard architecture for categorical image retrieval. However, its performance is largely limited by the prespecified rectangular spatial regions when pooling local descriptors. In this paper, we propose to learn object-shaped and directional receptive fields for image categorization. In particular, different objects in an image are seamlessly constructed by superpixels, while the direction captures human gaze shifting path. By generating a number of superpixels in each image, we construct graphlets to describe different objects. They function as the object-shaped receptive fields for image comparison. Due to the huge number of graphlets in an image, a saliency-guided graphlet selection algorithm is proposed. A manifold embedding algorithm encodes graphlets with the semantics of training image tags. Then, we derive a manifold propagation to calculate the postembedding graphlets by leveraging visual saliency maps. The sequentially propagated graphlets constitute a path that mimics human gaze shifting. Finally, we use the learned graphlet path as receptive fields for local image descriptor pooling. The local descriptors from similar receptive fields of pairwise images more significantly contribute to the final image kernel. 
Thorough experiments demonstrate the advantage of our approach.", "title": "" }, { "docid": "c0890c01e51ddedf881cd3d110efa6e2", "text": "A residual networks family with hundreds or even thousands of layers dominates major image recognition tasks, but building a network by simply stacking residual blocks inevitably limits its optimization ability. This paper proposes a novel residual network architecture, residual networks of residual networks (RoR), to dig the optimization ability of residual networks. RoR substitutes optimizing residual mapping of residual mapping for optimizing original residual mapping. In particular, RoR adds levelwise shortcut connections upon original residual networks to promote the learning capability of residual networks. More importantly, RoR can be applied to various kinds of residual networks (ResNets, Pre-ResNets, and WRN) and significantly boost their performance. Our experiments demonstrate the effectiveness and versatility of RoR, where it achieves the best performance in all residual-network-like structures. Our RoR-3-WRN58-4 + SD models achieve new state-of-the-art results on CIFAR-10, CIFAR-100, and SVHN, with the test errors of 3.77%, 19.73%, and 1.59%, respectively. RoR-3 models also achieve state-of-the-art results compared with ResNets on the ImageNet data set.", "title": "" }, { "docid": "30a57dc7d69b302219e05918b874d2b2", "text": "In recent years, flight delay problem blocks the development of the civil aviation industry all over the world. And delay propagation always is a main factor that impacts the flight's delay. All kinds of delays often happen in nearly-saturated or overloaded airports. This paper we take one busy hub-airport as the main research object to estimate the arrival delay in this airport, and to discuss the influence of propagation within and from this airport. First, a delay propagation model is described qualitatively in mathematics after sorting and analyzing the relationships between all flights, especially focused on the frequently type, named aircraft correlation. Second, an arrival delay model is established based on Bayesian network. By training the model, the arrival delay in this airport can be estimated. Third, after clarifying the arrival status of one airport, the impact from propagation of arrival delays within and from this busy airport is discussed, especially between the flights belonging to one same air company. All the data used in our experiments is come from real records, for the industry secret, the name of the airport and the air company is hidden.", "title": "" }, { "docid": "37a4b2d15a29132efa362b4de8f259fc", "text": "Without the need of any transmission line, a very compact decoupling network based on reactive lumped elements is presented for a two-element closely spaced array. The lumped network, consisting of two series and four shunt elements, can be analytically designed using the even-odd mode analysis. In the even mode, the half-circuit of the decoupling network is identical to an L-section matching network, while in the odd mode it is equivalent to a π-section one. The proposed decoupling network can deal with the matching conditions of the even and odd modes independently so as to simultaneously achieve good impedance matching and port isolation of the whole antenna array. 
The design principle, formulation, and experimental results including the radiation characteristics are introduced.", "title": "" }, { "docid": "4706560ae6318724e6eb487d23804a76", "text": "Schizophrenia is a complex neurodevelopmental disorder characterized by cognitive deficits. These deficits in cognitive functioning have been shown to relate to a variety of functional and treatment outcomes. Cognitive adaptation training (CAT) is a home-based, manual-driven treatment that utilizes environmental supports and compensatory strategies to bypass cognitive deficits and improve target behaviors and functional outcomes in individuals with schizophrenia. Unlike traditional case management, CAT provides environmental supports and compensatory strategies tailored to meet the behavioral style and neurocognitive deficits of each individual patient. The case of Ms. L. is presented to illustrate CAT treatment.", "title": "" }, { "docid": "cdf78bab8d93eda7ccbb41674d24b1a2", "text": "OBJECTIVE\nThe U.S. Food and Drug Administration and Institute of Medicine are currently investigating front-of-package (FOP) food labelling systems to provide science-based guidance to the food industry. The present paper reviews the literature on FOP labelling and supermarket shelf-labelling systems published or under review by February 2011 to inform current investigations and identify areas of future research.\n\n\nDESIGN\nA structured search was undertaken of research studies on consumer use, understanding of, preference for, perception of and behaviours relating to FOP/shelf labelling published between January 2004 and February 2011.\n\n\nRESULTS\nTwenty-eight studies from a structured search met inclusion criteria. Reviewed studies examined consumer preferences, understanding and use of different labelling systems as well as label impact on purchasing patterns and industry product reformulation.\n\n\nCONCLUSIONS\nThe findings indicate that the Multiple Traffic Light system has most consistently helped consumers identify healthier products; however, additional research on different labelling systems' abilities to influence consumer behaviour is needed.", "title": "" }, { "docid": "458dacc4d32c5a80bd88b88bf537e50e", "text": "The aim of the study is to investigate the spiritual intelligence role in predicting Quchan University students' quality of life. In order to collect data, a sample of 143 students of Quechan University was selected randomly enrolled for 89–90 academic year. The instruments of the data collecting are World Health Organization Quality of Life (WHOQOL) and Spiritual Intelligence Questionnaire. For analyzing the data, the standard deviation, and Pearson's correlation coefficient in descriptive level, and in inferential level, the regression test was used. The results of the study show that the spiritual intelligence has effective role on predicting quality of life.", "title": "" }, { "docid": "c3f943da2d68ee7980972a77c685fde6", "text": "Antiretroviral treatment (ART) and oral pre-exposure prophylaxis (PrEP) have recently been used efficiently in management of HIV infection. Pre-exposure prophylaxis consists in the use of an antiretroviral medication to prevent the acquisition of HIV infection by uninfected individuals. We propose a new model for the transmission of HIV/AIDS including ART and PrEP. 
Our model can be used to test the effects of ART and of the uptake of PrEP in a given population, as we demonstrate through simulations. The model can also be used to estimate future projections of HIV prevalence. We prove global stability of the disease-free equilibrium. We also prove global stability of the endemic equilibrium for the most general case of the model, i.e., which allows for PrEP individuals to default. We include insightful simulations based on recently published South-African data.", "title": "" }, { "docid": "fb37da1dc9d95501e08d0a29623acdab", "text": "This study evaluates various evolutionary search methods to direct neural controller evolution in company with policy (behavior) transfer across increasingly complex collective robotic (RoboCup keep-away) tasks. Robot behaviors are first evolved in a source task and then transferred for further evolution to more complex target tasks. Evolutionary search methods tested include objective-based search (fitness function), behavioral and genotypic diversity maintenance, and hybrids of such diversity maintenance and objective-based search. Evolved behavior quality is evaluated according to effectiveness and efficiency. Effectiveness is the average task performance of transferred and evolved behaviors, where task performance is the average time the ball is controlled by a keeper team. Efficiency is the average number of generations taken for the fittest evolved behaviors to reach a minimum task performance threshold given policy transfer. Results indicate that policy transfer coupled with hybridized evolution (behavioral diversity maintenance and objective-based search) addresses the bootstrapping problem for increasingly complex keep-away tasks. That is, this hybrid method (coupled with policy transfer) evolves behaviors that could not otherwise be evolved. Also, this hybrid evolutionary search was demonstrated as consistently evolving topologically simple neural controllers that elicited high-quality behaviors.", "title": "" }, { "docid": "39188ae46f22dd183f356ba78528b720", "text": "Systemic risk is a key concern for central banks charged with safeguarding overall financial stability. In this paper we investigate how systemic risk is affected by the structure of the financial system. We construct banking systems that are composed of a number of banks that are connected by interbank linkages. We then vary the key parameters that define the structure of the financial system — including its level of capitalisation, the degree to which banks are connected, the size of interbank exposures and the degree of concentration of the system — and analyse the influence of these parameters on the likelihood of contagious (knock-on) defaults. First, we find that the better capitalised banks are, the more resilient is the banking system against contagious defaults and this effect is non-linear. Second, the effect of the degree of connectivity is non-monotonic, that is, initially a small increase in connectivity increases the contagion effect; but after a certain threshold value, connectivity improves the ability of a banking system to absorb shocks. Third, the size of interbank liabilities tends to increase the risk of knock-on default, even if banks hold capital against such exposures. Fourth, more concentrated banking systems are shown to be prone to larger systemic risk, all else equal. In an extension to the main analysis we study how liquidity effects interact with banking structure to produce a greater chance of systemic breakdown. 
We finally consider how the risk of contagion might depend on the degree of asymmetry (tiering) inherent in the structure of the banking system. A number of our results have important implications for public policy, which this paper also draws out.", "title": "" }, { "docid": "c61e25e5896ff588764639b6a4c18d2e", "text": "Social media is continually emerging as a platform of information exchange around health challenges. We study mental health discourse on the popular social media: reddit. Building on findings about health information seeking and sharing practices in online forums, and social media like Twitter, we address three research challenges. First, we present a characterization of self-disclosure in mental illness communities on reddit. We observe individuals discussing a variety of concerns ranging from the daily grind to specific queries about diagnosis and treatment. Second, we build a statistical model to examine the factors that drive social support on mental health reddit communities. We also develop language models to characterize mental health social support, which are observed to bear emotional, informational, instrumental, and prescriptive information. Finally, we study disinhibition in the light of the dissociative anonymity that reddit’s throwaway accounts provide. Apart from promoting open conversations, such anonymity surprisingly is found to gather feedback that is more involving and emotionally engaging. Our findings reveal, for the first time, the kind of unique information needs that a social media like reddit might be fulfilling when it comes to a stigmatic illness. They also expand our understanding of the role of the social web in behavioral therapy.", "title": "" }, { "docid": "72f59a5342e3dc9d9c038fae8b9d4844", "text": "Borromean rings or links are topologically complex assemblies of three entangled rings where no two rings are interlinked in a chain-like catenane, yet the three rings cannot be separated. We report here a metallacycle complex whose crystalline network forms the first example of a new class of entanglement. The complex is formed from the self-assembly of CuBr2 with the cyclotriveratrylene-scaffold ligand (±)-tris(iso-nicotinoyl)cyclotriguaiacylene. Individual metallacycles are interwoven into a two-dimensional chainmail network where each metallacycle exhibits multiple Borromean-ring-like associations with its neighbours. This only occurs in the solid state, and also represents the first example of a crystalline infinite chainmail two-dimensional network. Crystals of the complex were twinned and have an unusual hollow tubular morphology that is likely to result from a localized dissolution-recrystallization process.", "title": "" }, { "docid": "ec69b95261fc19183a43c0e102f39016", "text": "The selection of a surgical approach for the treatment of tibia plateau fractures is an important decision. Approximately 7% of all tibia plateau fractures affect the posterolateral corner. Displaced posterolateral tibia plateau fractures require anatomic articular reduction and buttress plate fixation on the posterior aspect. These aims are difficult to reach through a lateral or anterolateral approach. The standard posterolateral approach with fibula osteotomy and release of the posterolateral corner is a traumatic procedure, which includes the risk of fragment denudation. Isolated posterior approaches do not allow sufficient visual control of fracture reduction, especially if the fracture is complex.
Therefore, the aim of this work was to present a surgical approach for posterolateral tibial plateau fractures that both protects the soft tissue and allows for good visual control of fracture reduction. The approach involves a lateral arthrotomy for visualizing the joint surface and a posterolateral approach for the fracture reduction and plate fixation, which are both achieved through one posterolateral skin incision. Using this approach, we achieved reduction of the articular surface and stable fixation in six of seven patients at the final follow-up visit. No complications and no loss of reduction were observed. Additionally, the new posterolateral approach permits direct visual exposure and facilitates the application of a buttress plate. Our approach does not require fibular osteotomy, and fragments of the posterolateral corner do not have to be detached from the soft tissue network.", "title": "" }, { "docid": "8e28f1561b3a362b2892d7afa8f2164c", "text": "Inference-based techniques are one of the major approaches to analyzing DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and to have good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on the weak “co-IP” relationship of domains (i.e., two domains are resolved to the same IP), which results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. The existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as to improve inference efficiency, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvements with only minor negative impact on detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.", "title": "" }, { "docid": "a29a61f5ad2e4b44e8e3d11b471a0f06", "text": "To ascertain by MRI the presence of filler injected into facial soft tissue and to characterize complications by contrast enhancement.
Nineteen volunteers without complications were initially investigated to study the MRI features of facial fillers. We then studied another 26 patients with clinically diagnosed filler-related complications using contrast-enhanced MRI. TSE-T1-weighted, TSE-T2-weighted, fat-saturated TSE-T2-weighted, and TIRM axial and coronal scans were performed in all patients, and contrast-enhanced fat-suppressed TSE-T1-weighted scans were performed in complicated patients, who were then treated with antibiotics. Patients with soft-tissue enhancement and those without enhancement but who did not respond to therapy underwent skin biopsy. Fisher’s exact test was used for statistical analysis. MRI identified and quantified the extent of fillers. Contrast enhancement was detected in 9/26 patients, and skin biopsy consistently showed inflammatory granulomatous reaction, whereas in 5/17 patients without contrast enhancement, biopsy showed no granulomas. Fisher’s exact test showed a significant correlation (p < 0.001) between subcutaneous contrast enhancement and granulomatous reaction. Cervical lymph node enlargement (longitudinal axis >10 mm) was found in 16 complicated patients (65 %; levels IA/IB/IIA/IIB). MRI is a useful non-invasive tool for anatomical localization of facial dermal filler; IV gadolinium administration is advised in complicated cases for characterization of granulomatous reaction. • MRI is a non-invasive tool for facial dermal filler detection and localization. • MRI criteria to evaluate complicated/non-complicated cases after facial dermal filler injections are defined. • Contrast-enhanced MRI detects subcutaneous inflammatory granulomatous reaction due to dermal filler. • 65 % of patients with filler-related complications showed lymph-node enlargement versus 31.5 % without complications. • Lymph node enlargement involved cervical levels (IA/IB/IIA/IIB) that drained treated facial areas.", "title": "" }, { "docid": "6821d4c1114e007453578dd90600db15", "text": "Our goal is to assess the strategic and operational benefits of electronic integration for industrial procurement. We conduct a field study with an industrial supplier and examine the drivers of performance of the procurement process. Our research quantifies both the operational and strategic impacts of electronic integration in a B2B procurement environment for a supplier. Additionally, we show that the customer also obtains substantial benefits from efficient procurement transaction processing. We isolate the performance impact of technology choice and ordering processes on both the trading partners. A significant finding is that the supplier derives large strategic benefits when the customer initiates the system and the supplier enhances the system’s capabilities. With respect to operational benefits, we find that when suppliers have advanced electronic linkages, the order-processing system significantly increases benefits to both parties. (Business Value of IT; Empirical Assessment; Electronic Integration; Electronic Procurement; B2B; Strategic IT Impact; Operational IT Impact)", "title": "" }, { "docid": "b9ef363fc7563dd14b3a4fd781d76d91", "text": "Deep learning (DL)-based Reynolds stress, with its capability to leverage the value of large data, can be used to close the Reynolds-averaged Navier-Stokes (RANS) equations. Type I and Type II machine learning (ML) frameworks are studied to investigate data and flow feature requirements while training DL-based Reynolds stress.
The paper presents a method, flow features coverage mapping (FFCM), to quantify the physics coverage of DL-based closures that can be used to examine the sufficiency of training data points as well as input flow features for data-driven turbulence models. Three case studies are formulated to demonstrate the properties of Type I and Type II ML. The first case indicates that errors of RANS equations with DL-based Reynolds stress by Type I ML accumulate over simulation time when training data do not sufficiently cover transient details. The second case uses Type I ML to show that DL can infer the time history of flow transients from data sampled at various times. The case study also shows that the necessary and sufficient flow features of DL-based closures are first-order spatial derivatives of velocity fields. The last case demonstrates the limitation of Type II ML for unsteady flow simulation. Type II ML requires initial conditions to be sufficiently close to reference data; then the reference data can be used to improve the RANS simulation.", "title": "" }, { "docid": "692b11d9502fa7f9fc299e1a9addbfb3", "text": "This paper presents the first version of the NIST Cloud Computing Reference Architecture (RA). This is a vendor-neutral conceptual model that concentrates on the role and interactions of the identified actors in the cloud computing sphere. Five primary actors were identified: Cloud Service Consumer, Cloud Service Provider, Cloud Broker, Cloud Auditor and Cloud Carrier. Their roles and activities are discussed in this report. A primary goal for generating this model was to give the United States Government (USG) a method for understanding and communicating the components of a cloud computing system for Federal IT executives, Program Managers and IT procurement officials.", "title": "" } ]
scidocsrr