query_id            stringlengths   32 .. 32
query               stringlengths   6 .. 5.38k
positive_passages   listlengths     1 .. 17
negative_passages   listlengths     9 .. 100
subset              stringclasses   7 values
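Each row below follows this schema: one query is paired with a list of relevant (positive) passages and a larger list of non-relevant (negative) passages, and every passage object carries "docid", "text", and "title" fields. As a minimal sketch of how such rows could be consumed, assuming the dump is stored as one JSON object per line in a file named scidocsrr_sample.jsonl (both the file name and the JSONL layout are assumptions, not part of the original dump), the following Python snippet builds (query, positive, negative) text triples of the kind typically used to train or evaluate a reranker:

import json

def load_rows(path):
    """Yield one record per line; each record follows the schema shown above."""
    with open(path, "r", encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line:
                yield json.loads(line)

def build_triples(rows, max_negatives=5):
    """Pair every positive passage with a few negatives for the same query."""
    for row in rows:
        query = row["query"]
        for pos in row["positive_passages"]:
            for neg in row["negative_passages"][:max_negatives]:
                yield {
                    "query_id": row["query_id"],
                    "subset": row["subset"],
                    "query": query,
                    "positive": pos["text"],
                    "negative": neg["text"],
                }

if __name__ == "__main__":
    # "scidocsrr_sample.jsonl" is a hypothetical file name used for illustration only.
    triples = list(build_triples(load_rows("scidocsrr_sample.jsonl")))
    print(f"built {len(triples)} training triples")

Capping the negatives per positive is only one sampling choice; the full negative list (up to 100 passages per query, per the schema) is available if harder or more exhaustive contrastive sets are needed.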
query_id: 11ea8aa4b735d9f5f53a9c893a4d16ba
query: Geospatial Big Data Handling Theory and Methods: A Review and Research Challenges
[ { "docid": "a38fe2a01aa7894a0b11e70841543332", "text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. Geographic data and tools are essential in all aspects of emergency management: preparedness, response, recovery, and mitigation. Geographic information created by amateur citizens, often known as volunteered geographic information, has recently provided an interesting alternative to traditional authoritative information from mapping agencies and corporations, and several recent papers have provided the beginnings of a literature on the more fundamental issues raised by this new source. Data quality is a major concern, since volunteered information is asserted and carries none of the assurances that lead to trust in officially created data. During emergencies time is the essence, and the risks associated with volunteered information are often outweighed by the benefits of its use. An example is discussed using the four wildfires that impacted the Santa Barbara area in 2007Á2009, and lessons are drawn. 1. Introduction Recent disasters have drawn attention to the vulnerability of human populations and infrastructure, and the extremely high cost of recovering from the damage they have caused. In all of these cases impacts were severe, in damage, injury, and loss of life, and were spread over large areas. In all of these cases modern technology has brought reports and images to the almost immediate attention of much of the world's population, and in the Katrina case it was possible for millions around the world to watch the events as they unfolded in near-real time. Images captured from satellites have been used to create damage assessments, and digital maps have been used to direct supplies and to guide the recovery effort, in an increasingly important application of Digital Earth. Nevertheless it has been clear in all of these cases that the potential of such data, and of geospatial data and tools more generally, …", "title": "" }, { "docid": "55b405991dc250cd56be709d53166dca", "text": "In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as item set concise representations, redundancy reduction, and post processing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient post processing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the post processing task. 
Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the post processing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process. KeywordsClustering, classification, and association rules, interactive data exploration and discovery, knowledge management applications.", "title": "" } ]
[ { "docid": "e9e5252319b5c62ba18628abc53a727b", "text": "This paper proposes a robust and fast scheme to detect moving objects in a non-stationary camera. The state-of-the art methods still do not give a satisfactory performance due to drastic frame changes in a non-stationary camera. To improve the robustness in performance, we additionally use the spatio-temporal properties of moving objects. We build the foreground probability map which reflects the spatio-temporal properties, then we selectively apply the detection procedure and update the background model only to the selected pixels using the foreground probability. The foreground probability is also used to refine the initial detection results to obtain a clear foreground region. We compare our scheme quantitatively and qualitatively to the state-of-the-art methods in the detection quality and speed. The experimental results show that our scheme outperforms all other compared methods.", "title": "" }, { "docid": "72f6f6484499ccaa0188d2a795daa74c", "text": "Road detection is one of the most important research areas in driver assistance and automated driving field. However, the performance of existing methods is still unsatisfactory, especially in severe shadow conditions. To overcome those difficulties, first we propose a novel shadow-free feature extractor based on the color distribution of road surface pixels. Then we present a road detection framework based on the extractor, whose performance is more accurate and robust than that of existing extractors. Also, the proposed framework has much low-complexity, which is suitable for usage in practical systems.", "title": "" }, { "docid": "00ae7d925a12b1f35f33213af08c82c9", "text": "Graph-based approaches have been successful in unsupervised and semi-supervised learning. In this paper, we focus on the real-world applications where the same instance can be represented by multiple heterogeneous features. The key point of utilizing the graph-based knowledge to deal with this kind of data is to reasonably integrate the different representations and obtain the most consistent manifold with the real data distributions. In this paper, we propose a novel framework via the reformulation of the standard spectral learning model, which can be used for multiview clustering and semisupervised tasks. Unlike other methods in the literature, the proposed methods can learn an optimal weight for each graph automatically without introducing an additive parameter as previous methods do. Furthermore, our objective under semisupervised learning is convex and the global optimal result will be obtained. Extensive empirical results on different real-world data sets demonstrate that the proposed methods achieve comparable performance with the state-of-the-art approaches and can be used more practically.", "title": "" }, { "docid": "278e83a20dc4f34df316ff408232cdf8", "text": "We present a Multi View Stereo approach for huge unstructured image datasets that can deal with large variations in surface sampling rate of single images. Our method reconstructs surface parts always in the best available resolution. It considers scaling not only for large scale differences, but also between arbitrary small ones for a weighted merging of the best partial reconstructions. We create depth maps with our GPU based depth map algorithm, that also performs normal optimization. It matches several images that are found with a heuristic image selection method, to a reference image. 
We remove outliers by comparing depth maps against each other with a fast but reliable GPU approach. Then, we merge the different reconstructions from depth maps in 3D space by selecting the best points and optimizing them with not selected points. Finally, we create the surface by using a Delaunay graph cut.", "title": "" }, { "docid": "2168edeee6171ef9df18f74f9b1d2c47", "text": "We present a novel high-level parallel programming model aimed at graphics processing units (GPUs). We embed GPU kernels as data-parallel array computations in the purely functional language Haskell. GPU and CPU computations can be freely interleaved with the type system tracking the two different modes of computation. The embedded language of array computations is sufficiently limited that our system can automatically extract these computations and compile them to efficient GPU code. In this paper, we outline our approach and present the results of a few preliminary benchmarks.", "title": "" }, { "docid": "1b7efa9ffda9aa23187ae7028ea5d966", "text": "Tools for clinical assessment and escalation of observation and treatment are insufficiently established in the newborn population. We aimed to provide an overview over early warning- and track and trigger systems for newborn infants and performed a nonsystematic review based on a search in Medline and Cinahl until November 2015. Search terms included 'infant, newborn', 'early warning score', and 'track and trigger'. Experts in the field were contacted for identification of unpublished systems. Outcome measures included reference values for physiological parameters including respiratory rate and heart rate, and ways of quantifying the extent of deviations from the reference. Only four neonatal early warning scores were published in full detail, and one system for infants with cardiac disease was considered as having a more general applicability. Temperature, respiratory rate, heart rate, SpO2, capillary refill time, and level of consciousness were parameters commonly included, but the definition and quantification of 'abnormal' varied slightly. The available scoring systems were designed for term and near-term infants in postpartum wards, not neonatal intensive care units. In conclusion, there is a limited availability of neonatal early warning scores. Scoring systems for high-risk neonates in neonatal intensive care units and preterm infants were not identified.", "title": "" }, { "docid": "3e409a01cfc02c0b89bae310c3f693fe", "text": "The last ten years have seen an increasing interest, within cognitive science, in issues concerning the physical body, the local environment, and the complex interplay between neural systems and the wider world in which they function. Yet many unanswered questions remain, and the shape of a genuinely physically embodied, environmentally embedded science of the mind is still unclear. In this article I will raise a number of critical questions concerning the nature and scope of this approach, drawing a distinction between two kinds of appeal to embodiment: (1) 'Simple' cases, in which bodily and environmental properties merely constrain accounts that retain the focus on inner organization and processing; and (2) More radical appeals, in which attention to bodily and environmental features is meant to transform both the subject matter and the theoretical framework of cognitive science.", "title": "" }, { "docid": "3baf8d673b5ecf130cf770019aaa3e3c", "text": "Fuzzy logic may be considered as an assortment of decision making techniques. 
In many applications like process control, the algorithm’s outcome is ruled by a number of key decisions which are made in the algorithm. Defining the best decision requires extensive knowledge of the system. When experience or understanding of the problem is not available, optimising the algorithm becomes very difficult. This is the reason why fuzzy logic is useful.", "title": "" }, { "docid": "fd35019f37ea3b05b7b6a14bf74d5ad1", "text": "Given the tremendous growth of sport fans, the “Intelligent Arena”, which can greatly improve the fun of traditional sports, becomes one of the new-emerging applications and research topics. The development of multimedia computing and artificial intelligence technologies support intelligent sport video analysis to add live video broadcast, score detection, highlight video generation, and online sharing functions to the intelligent arena applications. In this paper, we have proposed a deep learning based video analysis scheme for intelligent basketball arena applications. First of all, with multiple cameras or mobile devices capturing the activities in arena, the proposed scheme can automatically select the camera to give high-quality broadcast in real-time. Furthermore, with basketball energy image based deep conventional neural network, we can detect the scoring clips as the highlight video reels to support the wonderful actions replay and online sharing functions. Finally, evaluations on a built real-world basketball match dataset demonstrate that the proposed system can obtain 94.59% accuracy with only less than 45m s processing time (i.e., 10m s broadcast camera selection, and 35m s for scoring detection) for each frame. As the outstanding performance, the proposed deep learning based basketball video analysis scheme is implemented into a commercial intelligent basketball arena application named “Standz Basketball”. Although the application had been only released for one month, it achieves the 85t h day download ranking place in the sport category of Chinese iTunes market.", "title": "" }, { "docid": "2258a0ba739557d489a796f050fad3e0", "text": "The term fractional calculus is more than 300 years old. It is a generalization of the ordinary differentiation and integration to non-integer (arbitrary) order. The subject is as old as the calculus of differentiation and goes back to times when Leibniz, Gauss, and Newton invented this kind of calculation. In a letter to L’Hospital in 1695 Leibniz raised the following question (Miller and Ross, 1993): “Can the meaning of derivatives with integer order be generalized to derivatives with non-integer orders?\" The story goes that L’Hospital was somewhat curious about that question and replied by another question to Leibniz. “What if the order will be 1/2?\" Leibniz in a letter dated September 30, 1695 replied: “It will lead to a paradox, from which one day useful consequences will be drawn.\" The question raised by Leibniz for a fractional derivative was an ongoing topic in the last 300 years. Several mathematicians contributed to this subject over the years. People like Liouville, Riemann, and Weyl made major contributions to the theory of fractional calculus. The story of the fractional calculus continued with contributions from Fourier, Abel, Leibniz, Grünwald, and Letnikov. Nowadays, the fractional calculus attracts many scientists and engineers. 
There are several applications of this mathematical phenomenon in mechanics, physics, chemistry, control theory and so on (Caponetto et al., 2010; Magin, 2006; Monje et al., 2010; Oldham and Spanier, 1974; Oustaloup, 1995; Podlubny, 1999). It is natural that many authors tried to solve the fractional derivatives, fractional integrals and fractional differential equations in Matlab. A few very good and interesting Matlab functions were already submitted to the MathWorks, Inc. Matlab Central File Exchange, where they are freely downloadable for sharing among the users. In this chapter we will use some of them. It is worth mentioning some addition to Matlab toolboxes, which are appropriate for the solution of fractional calculus problems. One of them is a toolbox created by CRONE team (CRONE, 2010) and another one is the Fractional State–Space Toolkit developed by Dominik Sierociuk (Sierociuk, 2005). Last but not least we should also mention a Matlab toolbox created by Dingyü Xue (Xue, 2010), which is based on Matlab object for fractional-order transfer function and some manipulation with this class of the transfer function. Despite that the mentioned toolboxes are mainly for control systems, they can be “abused\" for solutions of general problems related to fractional calculus as well. 10", "title": "" }, { "docid": "5915dd433e50ae74ebcfe50229b27e58", "text": "Ultrasound imaging of thyroid gland provides the ability to acquire valuable information for medical diagnosis. This study presents a novel scheme for the analysis of longitudinal ultrasound images aiming at efficient and effective computer-aided detection of thyroid nodules. The proposed scheme involves two phases: a) application of a novel algorithm for the detection of the boundaries of the thyroid gland and b) detection of thyroid nodules via classification of Local Binary Pattern feature vectors extracted only from the area between the thyroid boundaries. Extensive experiments were performed on a set of B-mode thyroid ultrasound images. The results show that the proposed scheme is a faster and more accurate alternative for thyroid ultrasound image analysis than the conventional, exhaustive feature extraction and classification scheme.", "title": "" }, { "docid": "cd64e9677f921f6602197ba809d106f4", "text": "The global pandemic of physical inactivity requires a multisectoral, multidisciplinary public-health response. Scaling up interventions that are capable of increasing levels of physical activity in populations across the varying cultural, geographic, social, and economic contexts worldwide is challenging, but feasible. In this paper, we review the factors that could help to achieve this. We use a mixed-methods approach to comprehensively examine these factors, drawing on the best available evidence from both evidence-to-practice and practice-to-evidence methods. Policies to support active living across society are needed, particularly outside the health-care sector, as demonstrated by some of the successful examples of scale up identified in this paper. Researchers, research funders, and practitioners and policymakers in culture, education, health, leisure, planning, and transport, and civil society as a whole, all have a role. 
We should embrace the challenge of taking action to a higher level, aligning physical activity and health objectives with broader social, environmental, and sustainable development goals.", "title": "" }, { "docid": "a9fae3b86b21e40e71b99e5374cd3d4d", "text": "Motor vehicle collisions are an important cause of blunt abdominal trauma in pregnant woman. Among the possible outcomes of blunt abdominal trauma, placental abruption, direct fetal trauma, and rupture of the gravid uterus are described. An interesting case of complete fetal decapitation with uterine rupture due to a high-velocity motor vehicle collision is described. The external examination of the fetus showed a disconnection between the cervical vertebrae C3 and C4. The autopsy examination showed hematic infiltration of the epicranic soft tissues, an overlap of the parietal bones, and a subarachnoid hemorrhage in the posterior part of interparietal area. Histological analysis was carried out showing a lack of epithelium and hemorrhages in the subcutaneous tissue, a hematic infiltration between the muscular fibers of the neck and between the collagen and deep muscular fibers of the tracheal wall. Specimens collected from the placenta and from the uterus showed a hematic infiltration with hypotrophy of the placental villi, fibrosis of the mesenchymal villi with ischemic phenomena of the membrane. The convergence of circumstantial data, autopsy results, and histological data led us to conclude that the neck lesion was vital and the cause of death was attributed to the motor vehicle collision.", "title": "" }, { "docid": "fdb9da0c4b6225c69de16411c79ac9dc", "text": "Phylogenetic analyses reveal the evolutionary derivation of species. A phylogenetic tree can be inferred from multiple sequence alignments of proteins or genes. The alignment of whole genome sequences of higher eukaryotes is a computational intensive and ambitious task as is the computation of phylogenetic trees based on these alignments. To overcome these limitations, we here used an alignment-free method to compare genomes of the Brassicales clade. For each nucleotide sequence a Chaos Game Representation (CGR) can be computed, which represents each nucleotide of the sequence as a point in a square defined by the four nucleotides as vertices. Each CGR is therefore a unique fingerprint of the underlying sequence. If the CGRs are divided by grid lines each grid square denotes the occurrence of oligonucleotides of a specific length in the sequence (Frequency Chaos Game Representation, FCGR). Here, we used distance measures between FCGRs to infer phylogenetic trees of Brassicales species. Three types of data were analyzed because of their different characteristics: (A) Whole genome assemblies as far as available for species belonging to the Malvidae taxon. (B) EST data of species of the Brassicales clade. (C) Mitochondrial genomes of the Rosids branch, a supergroup of the Malvidae. The trees reconstructed based on the Euclidean distance method are in general agreement with single gene trees. The Fitch-Margoliash and Neighbor joining algorithms resulted in similar to identical trees. Here, for the first time we have applied the bootstrap re-sampling concept to trees based on FCGRs to determine the support of the branchings. 
FCGRs have the advantage that they are fast to calculate, and can be used as additional information to alignment based data and morphological characteristics to improve the phylogenetic classification of species in ambiguous cases.", "title": "" }, { "docid": "7df44111db2208ef80b210cfa6350de7", "text": "OBJECTIVE\nIndependently of total caloric intake, a better quality of the diet (for example, conformity to the Mediterranean diet) is associated with lower obesity risk. It is unclear whether a brief dietary assessment tool, instead of full-length comprehensive methods, can also capture this association. In addition to reduced costs, a brief tool has the interesting advantage of allowing immediate feedback to participants in interventional studies. Another relevant question is which individual items of such a brief tool are responsible for this association. We examined these associations using a 14-item tool of adherence to the Mediterranean diet as exposure and body mass index, waist circumference and waist-to-height ratio (WHtR) as outcomes.\n\n\nDESIGN\nCross-sectional assessment of all participants in the \"PREvención con DIeta MEDiterránea\" (PREDIMED) trial.\n\n\nSUBJECTS\n7,447 participants (55-80 years, 57% women) free of cardiovascular disease, but with either type 2 diabetes or ≥ 3 cardiovascular risk factors. Trained dietitians used both a validated 14-item questionnaire and a full-length validated 137-item food frequency questionnaire to assess dietary habits. Trained nurses measured weight, height and waist circumference.\n\n\nRESULTS\nStrong inverse linear associations between the 14-item tool and all adiposity indexes were found. For a two-point increment in the 14-item score, the multivariable-adjusted differences in WHtR were -0.0066 (95% confidence interval, -0.0088 to -0.0049) for women and -0.0059 (-0.0079 to -0.0038) for men. The multivariable-adjusted odds ratio for a WHtR>0.6 in participants scoring ≥ 10 points versus ≤ 7 points was 0.68 (0.57 to 0.80) for women and 0.66 (0.54 to 0.80) for men. High consumption of nuts and low consumption of sweetened/carbonated beverages presented the strongest inverse associations with abdominal obesity.\n\n\nCONCLUSIONS\nA brief 14-item tool was able to capture a strong monotonic inverse association between adherence to a good quality dietary pattern (Mediterranean diet) and obesity indexes in a population of adults at high cardiovascular risk.", "title": "" }, { "docid": "c1b8beec6f2cb42b5a784630512525f3", "text": "Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and super computers, which are difficult to setup, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay per use basis. These resources can be released when they are no more needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensure the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. 
Its flexible and service based infrastructure supports multiple programming paradigms that make Aneka address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of fMRI brain imaging workflow.", "title": "" }, { "docid": "5dda88503ed3562408a6721ba83aa832", "text": "The goal of supervised feature selection is to find a subset of input features that are responsible for predicting output values. The least absolute shrinkage and selection operator (Lasso) allows computationally efficient feature selection based on linear dependency between input features and output values. In this letter, we consider a feature-wise kernelized Lasso for capturing nonlinear input-output dependency. We first show that with particular choices of kernel functions, nonredundant features with strong statistical dependence on output values can be found in terms of kernel-based independence measures such as the Hilbert-Schmidt independence criterion. We then show that the globally optimal solution can be efficiently computed; this makes the approach scalable to high-dimensional problems. The effectiveness of the proposed method is demonstrated through feature selection experiments for classification and regression with thousands of features.", "title": "" }, { "docid": "7e91dd40445de51570a8c77cf50f7211", "text": "Based on phasor measurement units (PMUs), a synchronphasor system is widely recognized as a promising smart grid measurement system. It is able to provide high-frequency, high-accuracy phasor measurements sampling for Wide Area Monitoring and Control (WAMC) applications.However,the high sampling frequency of measurement data under strict latency constraints introduces new challenges for real time communication. It would be very helpful if the collected data can be prioritized according to its importance such that the existing quality of service (QoS) mechanisms in the communication networks can be leveraged. To achieve this goal, certain anomaly detection functions should be conducted by the PMUs. Inspired by the recent emerging edge-fog-cloud computing hierarchical architecture, which allows computing tasks to be conducted at the network edge, a novel PMU fog is proposed in this paper. Two anomaly detection approaches, Singular Spectrum Analysis (SSA) and K-Nearest Neighbors (KNN), are evaluated in the PMU fog using the IEEE 16-machine 68-bus system. The simulation experiments based on Riverbed Modeler demonstrate that the proposed PMU fog can effectively reduce the data flow end-to-end (ETE) delay without sacrificing data completeness.", "title": "" }, { "docid": "57fcce4eeac895ef56945008e2c4cd59", "text": "BACKGROUND\nComputational state space models (CSSMs) enable the knowledge-based construction of Bayesian filters for recognizing intentions and reconstructing activities of human protagonists in application domains such as smart environments, assisted living, or security. Computational, i. e., algorithmic, representations allow the construction of increasingly complex human behaviour models. However, the symbolic models used in CSSMs potentially suffer from combinatorial explosion, rendering inference intractable outside of the limited experimental settings investigated in present research. 
The objective of this study was to obtain data on the feasibility of CSSM-based inference in domains of realistic complexity.\n\n\nMETHODS\nA typical instrumental activity of daily living was used as a trial scenario. As primary sensor modality, wearable inertial measurement units were employed. The results achievable by CSSM methods were evaluated by comparison with those obtained from established training-based methods (hidden Markov models, HMMs) using Wilcoxon signed rank tests. The influence of modeling factors on CSSM performance was analyzed via repeated measures analysis of variance.\n\n\nRESULTS\nThe symbolic domain model was found to have more than 10(8) states, exceeding the complexity of models considered in previous research by at least three orders of magnitude. Nevertheless, if factors and procedures governing the inference process were suitably chosen, CSSMs outperformed HMMs. Specifically, inference methods used in previous studies (particle filters) were found to perform substantially inferior in comparison to a marginal filtering procedure.\n\n\nCONCLUSIONS\nOur results suggest that the combinatorial explosion caused by rich CSSM models does not inevitably lead to intractable inference or inferior performance. This means that the potential benefits of CSSM models (knowledge-based model construction, model reusability, reduced need for training data) are available without performance penalty. However, our results also show that research on CSSMs needs to consider sufficiently complex domains in order to understand the effects of design decisions such as choice of heuristics or inference procedure on performance.", "title": "" }, { "docid": "9bbf9422ae450a17e0c46d14acf3a3e3", "text": "This short paper outlines how polynomial chaos theory (PCT) can be utilized for manipulator dynamic analysis and controller design in a 4-DOF selective compliance assembly robot-arm-type manipulator with variation in both the link masses and payload. It includes a simple linear control algorithm into the formulation to show the capability of the PCT framework.", "title": "" } ]
subset: scidocsrr
query_id: 83e7814bfd6edc7b2bf894369f09abb0
query: Client as a first-class citizen: Practical user-centric network MIMO clustering
[ { "docid": "112ec676f74c22393d06bc23eaae50d8", "text": "Multi-user multiple-input multiple-output (MU-MIMO) is the latest communication technology that promises to linearly increase the wireless capacity by deploying more antennas on access points (APs). However, the large number of MIMO antennas will generate a huge amount of digital signal samples in real time. This imposes a grand challenge on the AP design by multiplying the computation and the I/O requirements to process the digital samples. This paper presents BigStation, a scalable architecture that enables realtime signal processing in large-scale MIMO systems which may have tens or hundreds of antennas. Our strategy to scale is to extensively parallelize the MU-MIMO processing on many simple and low-cost commodity computing devices. Our design can incrementally support more antennas by proportionally adding more computing devices. To reduce the overall processing latency, which is a critical constraint for wireless communication, we parallelize the MU-MIMO processing with a distributed pipeline based on its computation and communication patterns. At each stage of the pipeline, we further use data partitioning and computation partitioning to increase the processing speed. As a proof of concept, we have built a BigStation prototype based on commodity PC servers and standard Ethernet switches. Our prototype employs 15 PC servers and can support real-time processing of 12 software radio antennas. Our results show that the BigStation architecture is able to scale to tens to hundreds of antennas. With 12 antennas, our BigStation prototype can increase wireless capacity by 6.8x with a low mean processing delay of 860μs. While this latency is not yet low enough for the 802.11 MAC, it already satisfies the real-time requirements of many existing wireless standards, e.g., LTE and WCDMA.", "title": "" } ]
[ { "docid": "e19445c2ea8e19002a85ec9ace463990", "text": "In this paper we propose a system that takes attendance of student and maintaining its records in an academic institute automatically. Manually taking the attendance and maintaining it for a long time makes it difficult task as well as wastes a lot of time. For this reason an efficient system is designed. This system takes attendance with the help of a fingerprint sensor module and all the records are saved on a computer. Fingerprint sensor module and LCD screen are dynamic which can move in the room. In order to mark the attendance, student has to place his/her finger on the fingerprint sensor module. On identification of particular student, his attendance record is updated in the database and he/she is notified through LCD screen. In this system we are going to generate Microsoft excel attendance report on computer. This report will generate automatically after 15 days (depends upon user). This report will be sent to the respected HOD, teacher and student’s parents email Id.", "title": "" }, { "docid": "4ad1aa5086c15be3d5ba9d692d1772a2", "text": "We demonstrate the feasibility of detecting pathology in chest x-rays using deep learning approaches based on non-medical learning. Convolutional neural networks (CNN) learn higher level image representations. In this work we explore the features extracted from layers of the CNN along with a set of classical features, including GIST and bag-ofwords (BoW). We show results of classification using each feature set as well as fusing among the features. Finally, we perform feature selection on the collection of features to show the most informative feature set for the task. Results of 0.78-0.95 AUC for various pathologies are shown on a dataset of more than 600 radiographs. This study shows the strength and robustness of the CNN features. We conclude that deep learning with large scale nonmedical image databases may be a good substitute, or addition to domain specific representations which are yet to be available for general medical image recognition tasks.", "title": "" }, { "docid": "98e9d8fb4a04ad141b3a196fe0a9c08b", "text": "ÐGraphs are a powerful and universal data structure useful in various subfields of science and engineering. In this paper, we propose a new algorithm for subgraph isomorphism detection from a set of a priori known model graphs to an input graph that is given online. The new approach is based on a compact representation of the model graphs that is computed offline. Subgraphs that appear multiple times within the same or within different model graphs are represented only once, thus reducing the computational effort to detect them in an input graph. In the extreme case where all model graphs are highly similar, the run-time of the new algorithm becomes independent of the number of model graphs. Both a theoretical complexity analysis and practical experiments characterizing the performance of the new approach will be given. Index TermsÐGraph matching, graph isomorphism, subgraph isomorphism, preprocessing.", "title": "" }, { "docid": "3ae8865602c53847a0eec298c698a743", "text": "BACKGROUND\nA low ratio of utilization of healthcare services in postpartum women may contribute to maternal deaths during the postpartum period. The maternal mortality ratio is high in the Philippines. 
The aim of this study was to examine the current utilization of healthcare services and the effects on the health of women in the Philippines who delivered at home.\n\n\nMETHODS\nThis was a cross-sectional analytical study, based on a self-administrated questionnaire, conducted from March 2015 to February 2016 in Muntinlupa, Philippines. Sixty-three postpartum women who delivered at home or at a facility were enrolled for this study. A questionnaire containing questions regarding characteristics, utilization of healthcare services, and abnormal symptoms during postpartum period was administered. To analyze the questionnaire data, the sample was divided into delivery at home and delivery at a facility. Chi-square test, Fisher's exact test, and Mann-Whitney U test were used.\n\n\nRESULTS\nThere were significant differences in the type of birth attendant, area of residence, monthly income, and maternal and child health book usage between women who delivered at home and those who delivered at a facility (P<0.01). There was significant difference in the utilization of antenatal checkup (P<0.01) during pregnancy, whilst there was no significant difference in utilization of healthcare services during the postpartum period. Women who delivered at home were more likely to experience feeling of irritated eyes and headaches, and continuous abdominal pain (P<0.05).\n\n\nCONCLUSION\nFinancial and environmental barriers might hinder the utilization of healthcare services by women who deliver at home in the Philippines. Low utilization of healthcare services in women who deliver at home might result in more frequent abnormal symptoms during postpartum.", "title": "" }, { "docid": "1d074c67dec38a9459450ded74c54288", "text": "The focus of this review is the evolving field of antithrombotic drug therapy for stroke prevention in patients with atrial fibrillation (AF). The current standard of therapy includes warfarin, acenocoumarol and phenprocoumon which have proven efficacy by reducing stroke by 68% against placebo. However, a narrow therapeutic index, wide variation in metabolism, and numerous food and drug interactions have limited their clinical application to only 50% of the indicated population. Newer agents such as direct thrombin inhibitors, factor Xa inhibitors, factor IX inhibitors, tissue factor inhibitors and a novel vitamin K antagonist are being developed to overcome the limitations of current agents. The direct thrombin inhibitor dabigatran is farthest along in development. Further clinical trial testing, and eventual incorporation into clinical practice will depend on safety, efficacy and cost. Development of a novel vitamin K antagonist with better INR control will challenge the newer mechanistic agents in their quest to replace the existing vitamin K antagonists. Till then, the large unfilled gap to replace conventional agents remains open. This review will assess all these agents, and compare their mechanism of action, stage of development and pharmacologic profile.", "title": "" }, { "docid": "373d3549865647bd469b160d60db71c8", "text": "The encoding of time and its binding to events are crucial for episodic memory, but how these processes are carried out in hippocampal–entorhinal circuits is unclear. Here we show in freely foraging rats that temporal information is robustly encoded across time scales from seconds to hours within the overall population state of the lateral entorhinal cortex. 
Similarly pronounced encoding of time was not present in the medial entorhinal cortex or in hippocampal areas CA3–CA1. When animals’ experiences were constrained by behavioural tasks to become similar across repeated trials, the encoding of temporal flow across trials was reduced, whereas the encoding of time relative to the start of trials was improved. The findings suggest that populations of lateral entorhinal cortex neurons represent time inherently through the encoding of experience. This representation of episodic time may be integrated with spatial inputs from the medial entorhinal cortex in the hippocampus, allowing the hippocampus to store a unified representation of what, where and when. Temporal information that is useful for episodic memory is encoded across a wide range of timescales in the lateral entorhinal cortex, arising inherently from its representation of ongoing experience.", "title": "" }, { "docid": "ed23a782c3e4f03790fb5f7ec95d926c", "text": "This paper presents two WR-3 band (220–325 GHz) filters, one fabricated in metal using high precision computer numerically controlled milling and the other made with metallized SU-8 photoresist technology. Both are based on three coupled resonators, and are designed for a 287.3–295.9-GHz passband, and a 30-dB rejection between 317.7 and 325.9 GHz. The first filter is an extracted pole filter coupled by irises, and is precision milled using the split-block approach. The second filter is composed of three silver-coated SU-8 layers, each 432 μm thick. The filter structures are specially chosen to take advantage of the fabrication processes. When fabrication tolerances are accounted for, very good agreement between measurements and simulations are obtained, with median passband insertion losses of 0.41 and 0.45 dB for the metal and SU-8 devices, respectively. These two filters are potential replacements of frequency selective surface filters used in heterodyne radiometers for unwanted sideband rejection.", "title": "" }, { "docid": "de88ba2471bb33fcabf009049a619679", "text": "This paper presents a novel approach to utilize the CPU resource to facilitate the execution of GPGPU programs on fused CPU-GPU architectures. In our model of fused architectures, the GPU and the CPU are integrated on the same die and share the on-chip L3 cache and off-chip memory, similar to the latest Intel Sandy Bridge and AMD accelerated processing unit (APU) platforms. In our proposed CPU-assisted GPGPU, after the CPU launches a GPU program, it executes a pre-execution program, which is generated automatically from the GPU kernel using our proposed compiler algorithms and contains memory access instructions of the GPU kernel for multiple thread-blocks. The CPU pre-execution program runs ahead of GPU threads because (1) the CPU pre-execution thread only contains memory fetch instructions from GPU kernels and not floating-point computations, and (2) the CPU runs at higher frequencies and exploits higher degrees of instruction-level parallelism than GPU scalar cores. We also leverage the prefetcher at the L2-cache on the CPU side to increase the memory traffic from CPU. As a result, the memory accesses of GPU threads hit in the L3 cache and their latency can be drastically reduced. Since our pre-execution is directly controlled by user-level applications, it enjoys both high accuracy and flexibility. 
Our experiments on a set of benchmarks show that our proposed pre-execution improves the performance by up to 113% and 21.4% on average.", "title": "" }, { "docid": "2b4f3b7791a4f98d4ce4a7f7b6164573", "text": "Development of reliable and eco-friendly process for the synthesis of metallic nanoparticles is an important step in the field of application of nanotechnology. We have developed modern method by using agriculture waste to synthesize silver nanoparticles by employing an aqueous peel extract of Annona squamosa in AgNO(3). Controlled growth of silver nanoparticles was formed in 4h at room temperature (25°C) and 60°C. AgNPs were irregular spherical in shape and the average particle size was about 35±5 nm and it is consistent with particle size obtained by XRD Scherer equation.", "title": "" }, { "docid": "0e965b8941ddb47760300a35b80545be", "text": "Pathological lung segmentation (PLS) is an important, yet challenging, medical image application due to the wide variability of pathological lung appearance and shape. Because PLS is often a prerequisite for other imaging analytics, methodological simplicity and generality are key factors in usability. Along those lines, we present a bottomup deep-learning based approach that is expressive enough to handle variations in appearance, while remaining unaffected by any variations in shape. We incorporate the deeply supervised learning framework, but enhance it with a simple, yet effective, progressive multi-path scheme, which more reliably merges outputs from different network stages. The result is a deep model able to produce finer detailed masks, which we call progressive holistically-nested networks (P-HNNs). Using extensive cross-validation, our method is tested on a multi-institutional dataset comprising 929 CT scans (848 publicly available), of pathological lungs, reporting mean dice scores of 0.985 and demonstrating significant qualitative and quantitative improvements over state-of-the art approaches.", "title": "" }, { "docid": "ccb779859fe08d4ee58e016597c93e83", "text": "Deep convolutional neural networks (CNN) is highly efficient in image recognition tasks such as MNIST digit recognition. Accelerators based on FPGA platform are proposed since general purpose processor is disappointing in terms of performance when dealing with recognition tasks. Recently, an optimized FPGA-based accelerator design (work 1) has been proposed claiming best performance compared with existing implementations. But as the author acknowledged, performance could be better if fixed point presentation and computation elements had been used. Inspired by its methodology in implementing the Alexnet convolutional neural network, we implement a 5-layer accelerator for MNIST digit recognition task using the same Vivado HLS tool but using 11-bits fixed point precision on a Virtex7 FPGA. We compare performance on FPGA platform with the performance of the target CNN on MATLAB/CPU platform; we reach a speedup of 16.42. Our implementation runs at 150MHz and reaches a peak performance of 16.58 GMACS. Since our target CNN is simpler, we use much less resource than work 1 has used.", "title": "" }, { "docid": "32a3ed78cd8abe70977ef28bede467fd", "text": "Plagiarism in the sense of “theft of intellectual property” has been around for as long as humans have produced work of art and research. However, easy access to the Web, large databases, and telecommunication in general, has turned plagiarism into a serious problem for publishers, researchers and educational institutions. 
In this paper, we concentrate on textual plagiarism (as opposed to plagiarism in music, paintings, pictures, maps, technical drawings, etc.). We first discuss the complex general setting, then report on some results of plagiarism detection software and finally draw attention to the fact that any serious investigation in plagiarism turns up rather unexpected side-effects. We believe that this paper is of value to all researchers, educators and students and should be considered as seminal work that hopefully will encourage many still deeper investigations.", "title": "" }, { "docid": "98c72706e0da844c80090c1ed5f3abeb", "text": "Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can “interpolate”: By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.", "title": "" }, { "docid": "208b646525e63d231a4411dd8fbdb974", "text": "Most safety studies come to the conclusion that human error is the main cause of accidents. Nevertheless, such a conclusion has not proved to be efficient in its capacity to offer adequate means to fight again this error. In a purpose of better qualifying accident causation in TRACE, so-called 'human error' is analyzed here from literature review and in-depth accident data with the aim of going further than such a simple statement. The present report is aimed at investigating the different types of 'errors' with the help of a classification model formalizing typical 'Human Functional Failures' (HFF) involved in road accidents. These failures are not seen as the causes of road accidents, but as the result of the driving system malfunctions which can be found in its components (user/road/vehicle) and their defective interactions (unfitness of an element with another). Such a view tries to extend 'accident causation' analysis toward understanding, not only the causes, but also the processes involved in the accident production. So the purpose is to go further than establishing the facts, toward making a diagnosis on their production process. The usefulness of this diagnosis is to help defining countermeasures suited to the malfunction processes in question. This report D5.1, addressed to human functional failures, is in strong connection with Trace report D5.2 devoted to the factors (human and others) and situations of these failures. Trace deliverable D5.3 stresses the most recurrent typical scenarios in which the human functional failures are found. D5.4 is enlarging the questioning of 'human factors' from the side of sociological and cultural backgrounds determining accidental driving behaviour. 
All these reports are parts of WP5, whose main objective is to provide operational Work Packages of TRACE project with methodological support concerning 'human factors' aspects involved in road accidents. Keyword list: Human error Human factors Accident study Ergonomics Cognitive Psychology", "title": "" }, { "docid": "9d36947ff5f794942e153c21cdfc3a53", "text": "It is a well-established fact that corruption is a widespread phenomenon and it is widely acknowledged because of negative impact on economy and society. An important aspect of corruption is that two parties act separately or jointly in order to further their own interests at the expense of society. To strengthen prevent corruption, most of countries have construct special organization. The paper presents a new measure based on introducing game theory as an analytical tool for analyzing the relation between anti-corruption and corruption. Firstly, the paper introduces the corruption situation in China, gives the definition of the game theory and studies government anti-corruption activity through constructing the game theoretic models between anti-corruption and corruption. The relation between supervisor and the anti-corruption will be explained next. A thorough analysis of the mechanism of informant system has been made accordingly in the third part. At last, some suggestions for preventing and fight corruption are put forward.", "title": "" }, { "docid": "afa5296bca23dbcf138b7fc0ae0c9dd7", "text": "Learning a natural language interface for database tables is a challenging task that involves deep language understanding and multi-step reasoning. The task is often approached by mapping natural language queries to logical forms or programs that provide the desired response when executed on the database. To our knowledge, this paper presents the first weakly supervised, end-to-end neural network model to induce such programs on a real-world dataset. We enhance the objective function of Neural Programmer, a neural network with built-in discrete operations, and apply it on WikiTableQuestions, a natural language question-answering dataset. The model is trained end-to-end with weak supervision of question-answer pairs, and does not require domain-specific grammars, rules, or annotations that are key elements in previous approaches to program induction. The main experimental result in this paper is that a single Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with weak supervision. An ensemble of 15 models, with a trivial combination technique, achieves 37.7% accuracy, which is competitive to the current state-of-the-art accuracy of 37.1% obtained by a traditional natural language semantic parser. 1 BACKGROUND AND INTRODUCTION Databases are a pervasive way to store and access knowledge. However, it is not straightforward for users to interact with databases since it often requires programming skills and knowledge about database schemas. Overcoming this difficulty by allowing users to communicate with databases via natural language is an active research area. The common approach to this task is by semantic parsing, which is the process of mapping natural language to symbolic representations of meaning. In this context, semantic parsing yields logical forms or programs that provide the desired response when executed on the databases (Zelle & Mooney, 1996). 
Semantic parsing is a challenging problem that involves deep language understanding and reasoning with discrete operations such as counting and row selection (Liang, 2016). The first learning methods for semantic parsing require expensive annotation of question-program pairs (Zelle & Mooney, 1996; Zettlemoyer & Collins, 2005). This annotation process is no longer necessary in the current state-of-the-art semantic parsers that are trained using only question-answer pairs (Liang et al., 2011; Kwiatkowski et al., 2013; Krishnamurthy & Kollar, 2013; Pasupat & Liang, 2015). However, the performance of these methods still heavily depends on domain-specific grammar or pruning strategies to ease program search. For example, in a recent work on building semantic parsers for various domains, the authors hand-engineer a separate grammar for each domain (Wang et al., 2015). Recently, many neural network models have been developed for program induction (Andreas et al., 2016; Jia & Liang, 2016; Reed & Freitas, 2016; Zaremba et al., 2016; Yin et al., 2015), despite ∗Work done at Google Brain. 1 ar X iv :1 61 1. 08 94 5v 4 [ cs .C L ] 2 M ar 2 01 7 Published as a conference paper at ICLR 2017 Operations Count Select ArgMax ArgMin ... ... > < Print Neural Network What was the total number of goals scored in 2005 Row Selector Scalar Answer Lookup Answer timestep t", "title": "" }, { "docid": "4ad106897a19830c80a40e059428f039", "text": "In 1972, and later in 1979, at the peak of the golden era of Good Old Fashioned Artificial Intelligence (GOFAI), the voice of philosopher Hubert Dreyfus made itself heard as one of the few calls against the hubristic programme of modelling the human mind as a mechanism of symbolic information processing (Dreyfus, 1979). He did not criticise particular solutions to specific problems; instead his deep concern was with the very foundations of the programme. His critical stance was unusual, at least for most GOFAI practitioners, in that it did not rely on technical issues, but on a philosophical position emanating from phenomenology and existentialism, a fact contributing to his claims being largely ignored or dismissed for a long time by the AI community. But, for the most part, he was eventually proven right. AI’s over-reliance on worldmodelling and planning went against the evidence provided by phenomenology of human activity as situated and with a clear and ever-present focus of practical concern – the body and not some algorithm is the originating locus of intelligent activity (if by intelligent we understand intentional, directed and flexible), and the world is not the sum total of all available facts, but the world-as-it-is-for-this-body. Such concerns were later vindicated by the Brooksian revolution in autonomous robotics with its foundations on embodiment, situatedness and de-centralised mechanisms (Brooks, 1991). Brooks’ practical and methodological preoccupations – building robots largely based on biologically plausible principles and capable of acting in the real world – proved parallel, despite his claim that his approach was not “German philosophy”, to issues raised by Dreyfus. Putting robotics back as the acid test of AI, as oppossed to playing chess and proving theorems, is now often seen as a positive response to Dreyfus’ point that AI was unable to capture true meaning by the summing of meaningless processes. This criticism was later devastatingly recast in Searle’s Chinese Room argument (1980), and extended by Harnad’s Symbol Grounding Problem (1990). 
Meaningful activity – that is, meaningful for the agent and not only for the designer – must obtain through sensorimotor grounding in the agent’s world, and for this both a body and world are needed. Following these developments, work in autonomous robotics and new AI since the 1990s rebelled against pure connectionism because of its lack of biological plausibility and also because most of connectionist research was carried out in vacuo – it was compellingly argued that neural network models as simple input/output processing units are meaningless for modelling the cognitive capabilities of insects, let alone humans, unless they are embedded in a closed sensorimotor loop of interaction with a world (Cliff, 1991). Objective meaning, that is meaningful internal states and states of the world, can only obtain in an embodied agent whose effector and sensor activities become coordinated", "title": "" }, { "docid": "8b6d3b5fb8af809619119ee0f75cb3c6", "text": "This paper mainly discusses how to use histogram projection and LBDM (Learning Based Digital Matting) to extract a tongue from a medical image, which is one of the most important steps in diagnosis of traditional Chinese Medicine. We firstly present an effective method to locate the tongue body, getting the convinced foreground and background area in form of trimap. Then, use this trimap as the input for LBDM algorithm to implement the final segmentation. Experiment was carried out to evaluate the proposed scheme, using 480 samples of pictures with tongue, the results of which were compared with the corresponding ground truth. Experimental results and analysis demonstrated the feasibility and effectiveness of the proposed algorithm.", "title": "" }, { "docid": "c11b77f1392c79f4a03f9633c8f97f4d", "text": "The paper introduces and discusses a concept of syntactic n-grams (sn-grams) that can be applied instead of traditional n-grams in many NLP tasks. Sn-grams are constructed by following paths in syntactic trees, so sngrams allow bringing syntactic knowledge into machine learning methods. Still, previous parsing is necessary for their construction. We applied sn-grams in the task of authorship attribution for corpora of three and seven authors with very promising results.", "title": "" } ]
scidocsrr
926fbff178cd935caf21baf3e325c9c4
Chizpurfle: A Gray-Box Android Fuzzer for Vendor Service Customizations
[ { "docid": "049c9e3abf58bfd504fa0645bb4d1fdc", "text": "The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig.", "title": "" } ]
[ { "docid": "46a1dd05e29e206b9744bf15d48f5a5e", "text": "In this paper, we propose a distributed version of the Hungarian method to solve the well-known assignment problem. In the context of multirobot applications, all robots cooperatively compute a common assignment that optimizes a given global criterion (e.g., the total distance traveled) within a finite set of local computations and communications over a peer-to-peer network. As a motivating application, we consider a class of multirobot routing problems with “spatiotemporal” constraints, i.e., spatial targets that require servicing at particular time instants. As a means of demonstrating the theory developed in this paper, the robots cooperatively find online suboptimal routes by applying an iterative version of the proposed algorithm in a distributed and dynamic setting. As a concrete experimental test bed, we provide an interactive “multirobot orchestral” framework, in which a team of robots cooperatively plays a piece of music on a so-called orchestral floor.", "title": "" }, { "docid": "2f0ad3cc279dfb4a10f4fbad1b2f1186", "text": "OBJECTIVE\nTo assess the feasibility and robustness of an asynchronous and non-invasive EEG-based Brain-Computer Interface (BCI) for continuous mental control of a wheelchair.\n\n\nMETHODS\nIn experiment 1 two subjects were asked to mentally drive both a real and a simulated wheelchair from a starting point to a goal along a pre-specified path. Here we only report experiments with the simulated wheelchair for which we have extensive data in a complex environment that allows a sound analysis. Each subject participated in five experimental sessions, each consisting of 10 trials. The time elapsed between two consecutive experimental sessions was variable (from 1h to 2months) to assess the system robustness over time. The pre-specified path was divided into seven stretches to assess the system robustness in different contexts. To further assess the performance of the brain-actuated wheelchair, subject 1 participated in a second experiment consisting of 10 trials where he was asked to drive the simulated wheelchair following 10 different complex and random paths never tried before.\n\n\nRESULTS\nIn experiment 1 the two subjects were able to reach 100% (subject 1) and 80% (subject 2) of the final goals along the pre-specified trajectory in their best sessions. Different performances were obtained over time and path stretches, what indicates that performance is time and context dependent. In experiment 2, subject 1 was able to reach the final goal in 80% of the trials.\n\n\nCONCLUSIONS\nThe results show that subjects can rapidly master our asynchronous EEG-based BCI to control a wheelchair. Also, they can autonomously operate the BCI over long periods of time without the need for adaptive algorithms externally tuned by a human operator to minimize the impact of EEG non-stationarities. 
This is possible because of two key components: first, the inclusion of a shared control system between the BCI system and the intelligent simulated wheelchair; second, the selection of stable user-specific EEG features that maximize the separability between the mental tasks.\n\n\nSIGNIFICANCE\nThese results show the feasibility of continuously controlling complex robotics devices using an asynchronous and non-invasive BCI.", "title": "" }, { "docid": "231732058c9eb87d953eb457b7298fb8", "text": "The iris is regarded as one of the most useful traits for biometric recognition and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near infrared images with enough quality. Also, all of the publicly available iris image databases contain data correspondent to such imaging constraints and therefore are exclusively suitable to evaluate methods thought to operate on these type of environments. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris images database which singularly contains data captured in the visible wavelength, at-a-distance (between four and eight meters) and on on-the-move. This database is freely available for researchers concerned about visible wavelength iris recognition and will be useful in accessing the feasibility and specifying the constraints of this type of biometric recognition.", "title": "" }, { "docid": "6e4d20ed39b2257fd9d8b844ab1020c8", "text": "In this paper, we address the problem of shadow detection and removal from single images of natural scenes. Differently from traditional methods that explore pixel or edge information, we employ a region-based approach. In addition to considering individual regions separately, we predict relative illumination conditions between segmented regions from their appearances and perform pairwise classification based on such information. Classification results are used to build a graph of segments, and graph-cut is used to solve the labeling of shadow and nonshadow regions. Detection results are later refined by image matting, and the shadow-free image is recovered by relighting each pixel based on our lighting model. We evaluate our method on the shadow detection dataset in Zhu et al. . In addition, we created a new dataset with shadow-free ground truth images, which provides a quantitative basis for evaluating shadow removal. We study the effectiveness of features for both unary and pairwise classification.", "title": "" }, { "docid": "e1f8ac0ee1a5ec2175f3420e5874722d", "text": "In this paper we present an approach for the task of author profiling. We propose a coherent grouping of features combined with appropriate preprocessing steps for each group. The groups we used were stylometric and structural, featuring among others, trigrams and counts of twitter specific characteristics. We address gender and age prediction as a classification task and personality prediction as a regression problem using Support Vector Machines and Support Vector Machine Regression respectively on documents created by joining each user’s tweets.", "title": "" }, { "docid": "505b1fc76ef4e3fa6b0d5101e3dfd4fb", "text": "In this work the problem of guided improvisation is approached and elaborated; then a new method, Variable Markov Oracle, for guided music synthesis is proposed as the first step to tackle the guided improvisation problem. 
Variable Markov Oracle is based on previous results from Audio Oracle, which is a fast indexing and recombination method of repeating sub-clips in an audio signal. The newly proposed Variable Markov Oracle is capable of identifying inherent datapoint clusters in an audio signal while tracking the sequential relations among clusters at the same time. With a target audio signal indexed by Variable Markov Oracle, a query-matching algorithm is devised to synthesize new music materials by recombination of the target audio matched to a query audio. This approach makes the query-matching algorithm a solution to the guided music synthesis problem. The query-matching algorithm is efficient and intelligent since it follows the inherent clusters discovered by Variable Markov Oracle, creating a query-by-content result which allows numerous applications in concatenative synthesis, machine improvisation and interactive music system. Examples of using Variable Markov Oracle to synthesize new musical materials based on given music signals in the style of", "title": "" }, { "docid": "582b19b8dfb01928d82cbccf7497186b", "text": "Test coverage is an important metric of software quality, since it indicates thoroughness of testing. In industry, test coverage is often measured as statement coverage. A fundamental problem of software testing is how to achieve higher statement coverage faster, and it is a difficult problem since it requires testers to cleverly find input data that can steer execution sooner toward sections of application code that contain more statements.\n We created a novel fully automatic approach for aChieving higher stAtement coveRage FASTer (CarFast), which we implemented and evaluated on twelve generated Java applications whose sizes range from 300 LOC to one million LOC. We compared CarFast with several popular test case generation techniques, including pure random, adaptive random, and Directed Automated Random Testing (DART). Our results indicate with strong statistical significance that when execution time is measured in terms of the number of runs of the application on different input test data, CarFast outperforms the evaluated competitive approaches on most subject applications.", "title": "" }, { "docid": "609b1df5196de8809b6293a481868c93", "text": "In this paper, a new localization system utilizing afocal optical flow sensor (AOFS) based sensor fusion for indoor service robots in low luminance and slippery environment is proposed, where conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low luminance environment, it is hard to conduct conventional feature extraction and matching for localization. Instead, the interior space structure from an image and robot orientation was assessed. To enhance the appearance of image boundary, rolling guidance filter was applied after the histogram equalization. The proposed system was developed to be operable on a low-cost processor and implemented on a consumer robot. Experiments were conducted in low illumination condition of 0.1 lx and carpeted environment. The robot moved for 20 times in a 1.5 × 2.0 m square trajectory. 
When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error and orientation error were found as 0.8 m and within 1.0°, respectively.", "title": "" }, { "docid": "b32cd3e2763400dfc96c61e489673a6b", "text": "This paper presents a hybrid cascaded multilevel inverter for electric vehicles (EV) / hybrid electric vehicles (HEV) and utility interface applications. The inverter consists of a standard 3-leg inverter (one leg for each phase) and H-bridge in series with each inverter leg. It can use only a single DC power source to supply a standard 3-leg inverter along with three full H-bridges supplied by capacitors or batteries. Both fundamental frequency and high switching frequency PWM methods are used for the hybrid multilevel inverter. An experimental 5 kW prototype inverter is built and tested. The above two switching control methods are validated and compared experimentally.", "title": "" }, { "docid": "c6780317e8b4b41a27d8be813d51e050", "text": "The neural mechanisms by which intentions are transformed into actions remain poorly understood. We investigated the network mechanisms underlying spontaneous voluntary decisions about where to focus visual-spatial attention (willed attention). Graph-theoretic analysis of two independent datasets revealed that regions activated during willed attention form a set of functionally-distinct networks corresponding to the frontoparietal network, the cingulo-opercular network, and the dorsal attention network. Contrasting willed attention with instructed attention (where attention is directed by external cues), we observed that the dorsal anterior cingulate cortex was allied with the dorsal attention network in instructed attention, but shifted connectivity during willed attention to interact with the cingulo-opercular network, which then mediated communications between the frontoparietal network and the dorsal attention network. Behaviorally, greater connectivity in network hubs, including the dorsolateral prefrontal cortex, the dorsal anterior cingulate cortex, and the inferior parietal lobule, was associated with faster reaction times. These results, shown to be consistent across the two independent datasets, uncover the dynamic organization of functionally-distinct networks engaged to support intentional acts.", "title": "" }, { "docid": "722a2b6f773473d032d202ce7aded43c", "text": "Detection of skin cancer in the earlier stage is very Important and critical. In recent days, skin cancer is seen as one of the most Hazardous form of the Cancers found in Humans. Skin cancer is found in various types such as Melanoma, Basal and Squamous cell Carcinoma among which Melanoma is the most unpredictable. The detection of Melanoma cancer in early stage can be helpful to cure it. Computer vision can play important role in Medical Image Diagnosis and it has been proved by many existing systems. In this paper, we present a computer aided method for the detection of Melanoma Skin Cancer using Image processing tools. The input to the system is the skin lesion image and then by applying novel image processing techniques, it analyses it to conclude about the presence of skin cancer. The Lesion Image analysis tools checks for the various Melanoma parameters Like Asymmetry, Border, Colour, Diameter, (ABCD) etc. by texture, size and shape analysis for image segmentation and feature stages. 
The extracted feature parameters are used to classify the image as Normal skin and Melanoma cancer lesion.", "title": "" }, { "docid": "bf5874dc1fc1c968d7c41eb573d8d04a", "text": "As creativity is increasingly recognised as a vital component of entrepreneurship, researchers and educators struggle to reform enterprise pedagogy. To help in this effort, we use a personality test and open-ended interviews to explore creativity between two groups of entrepreneurship masters’ students: one at a business school and one at an engineering school. The findings indicate that both groups had high creative potential, but that engineering students channelled this into practical and incremental efforts whereas the business students were more speculative and had a clearer market focus. The findings are drawn on to make some suggestions for entrepreneurship education.", "title": "" }, { "docid": "a62cbb8f7b3a634ba1efe9dbe679fac6", "text": "Cloud Computing offers virtualized computing, storage, and networking resources, over the Internet, to organizations and individual users in a completely dynamic way. These cloud resources are cheaper, easier to manage, and more elastic than sets of local, physical, ones. This encourages customers to outsource their applications and services to the cloud. The migration of both data and applications outside the administrative domain of customers into a shared environment imposes transversal, functional problems across distinct platforms and technologies. This article provides a contemporary discussion of the most relevant functional problems associated with the current evolution of Cloud Computing, mainly from the network perspective. The paper also gives a concise description of Cloud Computing concepts and technologies. It starts with a brief history about cloud computing, tracing its roots. Then, architectural models of cloud services are described, and the most relevant products for Cloud Computing are briefly discussed along with a comprehensive literature review. The paper highlights and analyzes the most pertinent and practical network issues of relevance to the provision of high-assurance cloud services through the Internet, including security. Finally, trends and future research directions are also presented.", "title": "" }, { "docid": "25bb739c67fed1a4a0573ef1dff4d89e", "text": "Symbolic execution is a well-known program analysis technique which represents program inputs with symbolic values instead of concrete, initialized, data and executes the program by manipulating program expressions involving the symbolic values. Symbolic execution has been proposed over three decades ago but recently it has found renewed interest in the research community, due in part to the progress in decision procedures, availability of powerful computers and new algorithmic developments. We provide here a survey of some of the new research trends in symbolic execution, with particular emphasis on applications to test generation and program analysis. We first describe an approach that handles complex programming constructs such as input recursive data structures, arrays, as well as multithreading. Furthermore, we describe recent hybrid techniques that combine concrete and symbolic execution to overcome some of the inherent limitations of symbolic execution, such as handling native code or availability of decision procedures for the application domain. 
We follow with a discussion of techniques that can be used to limit the (possibly infinite) number of symbolic configurations that need to be analyzed for the symbolic execution of looping programs. Finally, we give a short survey of interesting new applications, such as predictive testing, invariant inference, program repair, analysis of parallel numerical programs and differential symbolic execution.", "title": "" }, { "docid": "fdd998012aa9b76ba9fe4477796ddebb", "text": "Low-power wireless devices must keep their radio transceivers off as much as possible to reach a low power consumption, but must wake up often enough to be able to receive communication from their neighbors. This report describes the ContikiMAC radio duty cycling mechanism, the default radio duty cycling mechanism in Contiki 2.5, which uses a power efficient wake-up mechanism with a set of timing constraints to allow device to keep their transceivers off. With ContikiMAC, nodes can participate in network communication yet keep their radios turned off for roughly 99% of the time. This report describes the ContikiMAC mechanism, measures the energy consumption of individual ContikiMAC operations, and evaluates the efficiency of the fast sleep and phase-lock optimizations.", "title": "" }, { "docid": "a56c98284e1ac38e9aa2e4aa4b7a87a9", "text": "Background: The extrahepatic biliary tree with the exact anatomic features of the arterial supply observed by laparoscopic means has not been described heretofore. Iatrogenic injuries of the extrahepatic biliary tree and neighboring blood vessels are not rare. Accidents involving vessels or the common bile duct during laparoscopic cholecystectomy, with or without choledocotomy, can be avoided by careful dissection of Calot's triangle and the hepatoduodenal ligament. Methods: We performed 244 laparoscopic cholecystectomies over a 2-year period between January 1, 1995 and January 1, 1997. Results: In 187 of 244 consecutive cases (76.6%), we found a typical arterial supply anteromedial to the cystic duct, near the sentinel cystic lymph node. In the other cases, there was an atypical arterial supply, and 27 of these cases (11.1%) had no cystic artery in Calot's triangle. A typical blood supply and accessory arteries were observed in 18 cases (7.4%). Conclusion: Young surgeons who are not yet familiar with the handling of an anatomically abnormal cystic blood supply need to be more aware of the precise anatomy of the extrahepatic biliary tree.", "title": "" }, { "docid": "7a9387636f01bb462aef2d3b32627c67", "text": "The Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC), a fleet of quadrotor helicopters, has been developed as a testbed for novel algorithms that enable autonomous operation of aerial vehicles. This paper develops an autonomous vehicle trajectory tracking algorithm through cluttered environments for the STARMAC platform. A system relying on a single optimization must trade off the complexity of the planned path with the rate of update of the control input. In this paper, a trajectory tracking controller for quadrotor helicopters is developed to decouple the two problems. By accepting as inputs a path of waypoints and desired velocities, the control input can be updated frequently to accurately track the desired path, while the path planning occurs as a separate process on a slower timescale. 
To enable the use of planning algorithms that do not consider dynamic feasibility or provide feedforward inputs, a computationally efficient algorithm using space-indexed waypoints is presented to modify the speed profile of input paths to guarantee feasibility of the planned trajectory and minimum time traversal of the planned. The algorithm is an efficient alternative to formulating a nonlinear optimization or mixed integer program. Both indoor and outdoor flight test results are presented for path tracking on the STARMAC vehicles.", "title": "" }, { "docid": "a1c2074b45adacc12437f60cbb491db1", "text": "Building extraction from remotely sensed imagery plays an important role in urban planning, disaster management, navigation, updating geographic databases and several other geospatial applications. Several published contributions are dedicated to the applications of Deep Convolutional Neural Network (DCNN) for building extraction using aerial/satellite imagery exists; however, in all these contributions a good accuracy is always paid at the price of extremely complex and large network architectures. In this paper, we present an enhanced Fully Convolutional Network (FCN) framework especially molded for building extraction of remotely sensed images by applying Conditional Random Field (CRF). The main purpose here is to propose a framework which balances maximum accuracy with less network complexity. The modern activation function called Exponential Linear Unit (ELU) is applied to improve the performance of the Fully Convolutional Network (FCN), resulting in more, yet accurate building prediction. To further reduce the noise (false classified buildings) and to sharpen the boundary of the buildings, a post processing CRF is added at the end of the adopted Convolutional Neural Network (CNN) framework. The experiments were conducted on Massachusetts building aerial imagery. The results show that our proposed framework outperformed FCN baseline, which is the existing baseline framework for semantic segmentation, in term of performance measure, the F1-score and Intersection Over Union (IoU) measure. Additionally, the proposed method stood superior to the pre-existing classifier for building extraction using the same dataset in terms of performance measure and network complexity at once.", "title": "" }, { "docid": "f1441fc1d02078384fd8bb8f546199e0", "text": "The performance of portable and wearable biosensors is highly influenced by motion artifact. In this paper, a novel real-time adaptive algorithm is proposed for accurate motion-tolerant extraction of heart rate (HR) and pulse oximeter oxygen saturation (SpO2) from wearable photoplethysmographic (PPG) biosensors. The proposed algorithm removes motion artifact due to various sources including tissue effect and venous blood changes during body movements and provides noise-free PPG waveforms for further feature extraction. A two-stage normalized least mean square adaptive noise canceler is designed and validated using a novel synthetic reference signal at each stage. Evaluation of the proposed algorithm is done by Bland-Altman agreement and correlation analyses against reference HR from commercial ECG and SpO2 sensors during standing, walking, and running at different conditions for a single- and multisubject scenarios. Experimental results indicate high agreement and high correlation (more than 0.98 for HR and 0.7 for SpO2 extraction) between measurements by reference sensors and our algorithm.", "title": "" } ]
scidocsrr
f9b1ef0c013e676d96be3ec2556744c4
Vehicle Active Steering Control System Based on Human Mechanical Impedance Properties of the Arms
[ { "docid": "dbfdb9251e8b9738eaebae3bcd708926", "text": "Stable Haptic Interaction with Virtual Environments", "title": "" } ]
[ { "docid": "741aefcfa90a6a4ddc08ea293f13ec88", "text": "The Timeline Followback (TLFB), a retrospective calendar-based measure of daily substance use, was initially developed to obtain self-reports of alcohol use. Since its inception it has undergone extensive evaluation across diverse populations and is considered the most psychometrically sound self-report measure of drinking. Although the TLFB has been extended to other behaviors, its psychometric evaluation with other addictive behaviors has not been as extensive as for alcohol use. The present study evaluated the test-retest reliability of the TLFB for cocaine, cannabis, and cigarette use for participants recruited from outpatient alcohol and drug treatment programs and the general community across intervals ranging from 30 to 360 days prior to the interview. The dependent measure for cigarette smokers and cannabis users was daily use of cigarettes and joints, respectively, and for cocaine users it was a \"Yes\" or \"No\" regarding cocaine use for each day. The TLFB was administered in different formats for different drug types. Different interviewers conducted the two interviews. The TLFB collected highly reliable information about participants' daily use of cocaine, cannabis, and cigarettes from 30, 90, to 360 days prior to the interview. Findings from this study not only suggest that shorter time intervals (e.g., 90 days) can be used with little loss of accuracy, but also add to the growing literature that the TLFB can be used with confidence to collect psychometrically sound information about substance use (i.e., cocaine, cannabis, cigarettes) other than alcohol in treatment- and nontreatment-seeking populations for intervals from ranging up to 12 months prior to the interview.", "title": "" }, { "docid": "c3e7a2d7689ef31140b44d4acdc196c3", "text": "Path planning for autonomous vehicles in dynamic environments is an important but challenging problem, due to the constraints of vehicle dynamics and existence of surrounding vehicles. Typical trajectories of vehicles involve different modes of maneuvers, including lane keeping, lane change, ramp merging, and intersection crossing. There exist prior arts using the rule-based high-level decision making approaches to decide the mode switching. Instead of using explicit rules, we propose a unified path planning approach using Model Predictive Control (MPC), which automatically decides the mode of maneuvers. To ensure safety, we model surrounding vehicles as polygons and develop a type of constraints in MPC to enforce the collision avoidance between the ego vehicle and surrounding vehicles. To achieve comfortable and natural maneuvers, we include a lane-associated potential field in the objective function of the MPC. We have simulated the proposed method in different test scenarios and the results demonstrate the effectiveness of the proposed approach in automatically generating reasonable maneuvers while guaranteeing the safety of the autonomous vehicle.", "title": "" }, { "docid": "f4166e4121dbd6f6ab209e6d99aac63f", "text": "In this paper, we propose several novel deep learning methods for object saliency detection based on the powerful convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify an input image based on the pixel-wise gradients to reduce a cost function measuring the class-specific objectness of the image. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. 
The discrepancy between the modified image and the original one may be used as a saliency map for the image. Moreover, we have further proposed several new training methods to learn saliency-specific convolutional nets for object saliency detection, in order to leverage the available pixel-wise segmentation information. Our methods are extremely computationally efficient (processing 20-40 images per second in one GPU). In this work, we use the computed saliency maps for image segmentation. Experimental results on two benchmark tasks, namely Microsoft COCO and Pascal VOC 2012, have shown that our proposed methods can generate high-quality salience maps, clearly outperforming many existing methods. In particular, our approaches excel in handling many difficult images, which contain complex background, highly-variable salient objects, multiple objects, and/or very small salient objects.", "title": "" }, { "docid": "f69ba8c401cd61057888dfa023bfee30", "text": "Since its introduction, the Nintendo Wii remote has become one of the world's most sophisticated and common input devices. Combining its impressive capability with a low cost and high degree of accessibility make it an ideal platform for exploring a variety of interaction research concepts. The author describes the technology inside the Wii remote, existing interaction techniques, what's involved in creating custom applications, and several projects ranging from multiobject tracking to spatial augmented reality that challenge the way its developers meant it to be used.", "title": "" }, { "docid": "5868ec5c17bf7349166ccd0600cc6b07", "text": "Secure devices are often subject to attacks and behavioural analysis in order to inject faults on them and/or extract otherwise secret information. Glitch attacks, sudden changes on the power supply rails, are a common technique used to inject faults on electronic devices. Detectors are designed to catch these attacks. As the detectors become more efficient, new glitches that are harder to detect arise. Common glitch detection approaches, such as directly monitoring the power rails, can potentially find it hard to detect fast glitches, as these become harder to differentiate from noise. This paper proposes a design which, instead of monitoring the power rails, monitors the effect of a glitch on a sensitive circuit, hence reducing the risk of detecting noise as glitches.", "title": "" }, { "docid": "e390d922f802267ac4e7bd336080e2ca", "text": "Assessment as a dynamic process produces data that reasonable conclusions are derived by stakeholders for decision making that expectedly impact on students' learning outcomes. The data mining methodology while extracting useful, valid patterns from higher education database environment contribute to proactively ensuring students maximize their academic output. This paper develops a methodology by the derivation of performance prediction indicators to deploying a simple student performance assessment and monitoring system within a teaching and learning environment by mainly focusing on performance monitoring of students' continuous assessment (tests) and examination scores in order to predict their final achievement status upon graduation. Based on various data mining techniques (DMT) and the application of machine learning processes, rules are derived that enable the classification of students in their predicted classes. 
The deployment of the prototyped solution, integrates measuring, 'recycling' and reporting procedures in the new system to optimize prediction accuracy.", "title": "" }, { "docid": "ce17d6e994c86fcc2ea964df996de397", "text": "We motivate and review the definition of differential privacy, survey some results on differentially private statistical estimators, and outline a research agenda. This survey is based on two presentations given by the authors at an NCHS/CDC sponsored workshop on data privacy in May 2008.", "title": "" }, { "docid": "ecdb103e650be2afc4192979a2463af0", "text": "We have developed an F-band (90 to 140 GHz) bidirectional amplifier MMIC using a 75-nm InP HEMT technology for short-range millimeter-wave multi-gigabit communication systems. Inherent symmetric common-gate transistors and parallel circuits consisting of an inductor and a switch realizes a bidirectional operation with a wide bandwidth of over 50 GHz. Small signal gains of 12-15 dB and 9-12 dB were achieved in forward and reverse directions, respectively. Fractional bandwidths of the developed bidirectional amplifier were 39% for the forward direction and 32% for the reverse direction, which were almost double as large as those of conventional bidirectional amplifiers. The power consumption of the bidirectional amplifier was 15 mW under a 2.4-V supply. The chip measures 0.70 × 0.65 mm. The simulated NF is lower than 5 dB, and Psat is larger than 5 dBm. The use of this bidirectional amplifier provides miniaturization of the multi-gigabit communication systems and eliminates signal switching loss.", "title": "" }, { "docid": "2079bd806c3b6b9de28b0a3d158f63f3", "text": "Beam search is a desirable choice of test-time decoding algorithm for neural sequence models because it potentially avoids search errors made by simpler greedy methods. However, typical cross entropy training procedures for these models do not directly consider the behaviour of the final decoding method. As a result, for cross-entropy trained models, beam decoding can sometimes yield reduced test performance when compared with greedy decoding. In order to train models that can more effectively make use of beam search, we propose a new training procedure that focuses on the final loss metric (e.g. Hamming loss) evaluated on the output of beam search. While well-defined, this “direct loss” objective is itself discontinuous and thus difficult to optimize. Hence, in our approach, we form a sub-differentiable surrogate objective by introducing a novel continuous approximation of the beam search decoding procedure. In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross entropy trained greedy decoding and cross entropy trained beam decoding baselines.", "title": "" }, { "docid": "4cfd4f09a88186cb7e5f200e340d1233", "text": "Keyword spotting (KWS) aims to detect predefined keywords in continuous speech. Recently, direct deep learning approaches have been used for KWS and achieved great success. However, these approaches mostly assume fixed keyword vocabulary and require significant retraining efforts if new keywords are to be detected. For unrestricted vocabulary, HMM based keywordfiller framework is still the mainstream technique. In this paper, a novel deep learning approach is proposed for unrestricted vocabulary KWS based on Connectionist Temporal Classification (CTC) with Long Short-Term Memory (LSTM). 
Here, an LSTM is trained to discriminant phones with the CTC criterion. During KWS, an arbitrary keyword can be specified and it is represented by one or more phone sequences. Due to the property of peaky phone posteriors of CTC, the LSTM can produce a phone lattice. Then, a fast substring matching algorithm based on minimum edit distance is used to search the keyword phone sequence on the phone lattice. The approach is highly efficient and vocabulary independent. Experiments showed that the proposed approach can achieve significantly better results compared to a DNN-HMM based keyword-filler decoding system. In addition, the proposed approach is also more efficient than the DNN-HMM KWS baseline.", "title": "" }, { "docid": "637ca0ccdc858c9e84ffea1bd3531024", "text": "We propose a method to facilitate search through the storyline of TV series episodes. To this end, we use human written, crowdsourced descriptions—plot synopses—of the story conveyed in the video. We obtain such synopses from websites such as Wikipedia and propose various methods to align each sentence of the plot to shots in the video. Thus, the semantic story-based video retrieval problem is transformed into a much simpler text-based search. Finally, we return the set of shots aligned to the sentences as the video snippet corresponding to the query. The alignment is performed by first computing a similarity score between every shot and sentence through cues such as character identities and keyword matches between plot synopses and subtitles. We then formulate the alignment as an optimization problem and solve it efficiently using dynamic programming. We evaluate our methods on the fifth season of a TV series Buffy the Vampire Slayer and show encouraging results for both the alignment and the retrieval of story events.", "title": "" }, { "docid": "3dee885a896e9864ff06b546d64f6df1", "text": "BACKGROUND\nThe 12-item Short Form Health Survey (SF-12) as a shorter alternative of the SF-36 is largely used in health outcomes surveys. The aim of this study was to validate the SF-12 in Iran.\n\n\nMETHODS\nA random sample of the general population aged 15 years and over living in Tehran, Iran completed the SF-12. Reliability was estimated using internal consistency and validity was assessed using known groups comparison and convergent validity. In addition, the factor structure of the questionnaire was extracted by performing both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA).\n\n\nRESULTS\nIn all, 5587 individuals were studied (2721 male and 2866 female). The mean age and formal education of the respondents were 35.1 (SD = 15.4) and 10.2 (SD = 4.4) years respectively. The results showed satisfactory internal consistency for both summary measures, that are the Physical Component Summary (PCS) and the Mental Component Summary (MCS); Cronbach's alpha for PCS-12 and MCS-12 was 0.73 and 0.72, respectively. Known-groups comparison showed that the SF-12 discriminated well between men and women and those who differed in age and educational status (P < 0.001). In addition, correlations between the SF-12 scales and single items showed that the physical functioning, role physical, bodily pain and general health subscales correlated higher with the PCS-12 score, while the vitality, social functioning, role emotional and mental health subscales more correlated with the MCS-12 score lending support to its good convergent validity. 
Finally the principal component analysis indicated a two-factor structure (physical and mental health) that jointly accounted for 57.8% of the variance. The confirmatory factory analysis also indicated a good fit to the data for the two-latent structure (physical and mental health).\n\n\nCONCLUSION\nIn general the findings suggest that the SF-12 is a reliable and valid measure of health related quality of life among Iranian population. However, further studies are needed to establish stronger psychometric properties for this alternative form of the SF-36 Health Survey in Iran.", "title": "" }, { "docid": "e6f37f7b73c6d511f38b4adb4b7938e0", "text": "Context: Every software development project uses folders to organize software artifacts. Goal: We would like to understand how folders are used and what ramifications different uses may have. Method: In this paper we study the frequency of folders used by 140k Github projects and use regression analysis to model how folder use is related to project popularity, i.e., the extent of forking. Results: We find that the standard folders, such as document, testing, and examples, are not only among the most frequently used, but their presence in a project is associated with increased chances that a project's code will be forked (i.e., used by others) and an increased number of forks. Conclusions: This preliminary study of folder use suggests opportunities to quantify (and improve) file organization practices based on folder use patterns of large collections of repositories.", "title": "" }, { "docid": "77be4363f9080eb8a3b73c9237becca4", "text": "Aim: The purpose of this paper is to present findings of an integrative literature review related to employees’ motivational practices in organizations. Method: A broad search of computerized databases focusing on articles published in English during 1999– 2010 was completed. Extensive screening sought to determine current literature themes and empirical research evidence completed in employees’ focused specifically on motivation in organization. Results: 40 articles are included in this integrative literature review. The literature focuses on how job characteristics, employee characteristic, management practices and broader environmental factors influence employees’ motivation. Research that links employee’s motivation is both based on qualitative and quantitative studies. Conclusion: This literature reveals widespread support of motivation concepts in organizations. Theoretical and editorial literature confirms motivation concepts are central to employees. Job characteristics, management practices, employee characteristics and broader environmental factors are the key variables influence employees’ motivation in organization.", "title": "" }, { "docid": "d669dfcdc2486314bd7234e1f42357de", "text": "The Luneburg lens (LL) represents a very attractive candidate for many applications such as multibeam antennas, multifrequency scanning, and spatial scanning, due to its focusing properties. Indeed, it is a dielectric sphere on which each surface point is a frequency-independent perfect focusing point. This is produced by its index governing law n, which follows the radial distribution n/sup 2/=2-r/sup 2/, where r is the normalized radial position. Practically, an LL is manufactured as a finite number of concentric homogeneous dielectric shells - this is called a discrete LL. The inaccuracies in the curved shell manufacturing process produce intershell air gaps, which degrade the performance of the lens. 
Furthermore, this requires different materials whose relative dielectric constant covers the range 1-2. The paper proposes a new LL manufacturing process to avoid these drawbacks. The paper describe the theoretical background and the performance of the obtained lens.", "title": "" }, { "docid": "645a1d50394e9cf746e88398ca867ad2", "text": "In this paper, we conduct a comparative analysis of two associative memory-based pattern recognition algorithms. We compare the established Hopfield network algorithm with our novel Distributed Hierarchical Graph Neuron (DHGN) algorithm. The computational complexity and recall efficiency aspects of these algorithms are discussed. The results show that DHGN offers lower computational complexity with better recall efficiency compared to the Hopfield network.", "title": "" }, { "docid": "f8410df5746c3271cd5b495b91a1c316", "text": "Cognitive control supports flexible behavior by selecting actions that are consistent with our goals and appropriate for our environment. The prefrontal cortex (PFC) has an established role in cognitive control, and research on the functional organization of PFC promises to contribute to our understanding of the architecture of control. A recently popular hypothesis is that the rostro-caudal axis of PFC supports a control hierarchy whereby posterior-to-anterior PFC mediates progressively abstract, higher-order control. This review discusses evidence for a rostro-caudal gradient of function in PFC and the theories proposed to account for these results, including domain generality in working memory, relational complexity, the temporal organization of behavior and abstract representational hierarchy. Distinctions among these frameworks are considered as a basis for future research.", "title": "" }, { "docid": "a92f788b44411691a8ad5372b2fa4b55", "text": "We study the problem of minimizing the average of a large number of smooth convex functions penalized with a strongly convex regularizer. We propose and analyze a novel primal-dual method (Quartz) which at every iteration samples and updates a random subset of the dual variables, chosen according to an arbitrary distribution. In contrast to typical analysis, we directly bound the decrease of the primal-dual error (in expectation), without the need to first analyze the dual error. Depending on the choice of the sampling, we obtain efficient serial and mini-batch variants of the method. In the serial case, our bounds match the best known bounds for SDCA (both with uniform and importance sampling). With standard mini-batching, our bounds predict initial data-independent speedup as well as additional data-driven speedup which depends on spectral and sparsity properties of the data.", "title": "" }, { "docid": "328abff1a187a71fe77ce078e9f1647b", "text": "A convenient way of analysing Riemannian manifolds is to embed them in Euclidean spaces, with the embedding typically obtained by flattening the manifold via tangent spaces. This general approach is not free of drawbacks. For example, only distances between points to the tangent pole are equal to true geodesic distances. This is restrictive and may lead to inaccurate modelling. Instead of using tangent spaces, we propose embedding into the Reproducing Kernel Hilbert Space by introducing a Riemannian pseudo kernel. We furthermore propose to recast a locality preserving projection technique from Euclidean spaces to Riemannian manifolds, in order to demonstrate the benefits of the embedding. 
Experiments on several visual classification tasks (gesture recognition, person re-identification and texture classification) show that in comparison to tangent-based processing and state-of-the-art methods (such as tensor canonical correlation analysis), the proposed approach obtains considerable improvements in discrimination accuracy.", "title": "" }, { "docid": "176a982a60e302dcdd50484562dec7ce", "text": "The palatine aponeurosis is a thin, fibrous lamella comprising the extended tendons of the tensor veli palatini muscles, attached to the posterior border and inferior surface of the palatine bone. In dentistry, the relationship between the “vibrating line” and the border of the hard and soft palate has long been discussed. However, to our knowledge, there has been no discussion of the relationship between the palatine aponeurosis and the vibrating line(s). Twenty sides from ten fresh frozen White cadaveric heads (seven males and three females) whose mean age at death was 79 years) were used in this study. The thickness of the mucosa including the submucosal tissue was measured. The maximum length of the palatine aponeurosis on each side and the distance from the posterior nasal spine to the posterior border of the palatine aponeurosis in the midline were also measured. The relationship between the marked borderlines and the posterior border of the palatine bone was observed. The thickness of the mucosa and submucosal tissue on the posterior nasal spine and the maximum length of the palatine aponeurosis were 3.4 mm, and 12.2 mm on right side and 12.8 mm on left, respectively. The length of the palatine aponeurosis in the midline was 4.9 mm. In all specimens, the borderline between the compressible and incompressible parts corresponded to the posterior border of the palatine bone.", "title": "" } ]
scidocsrr
40c55e64dc53b26c13209a12c8faa3a9
SAR Image Classification via Deep Recurrent Encoding Neural Networks
[ { "docid": "751b2a0e7b39e005d1664b302f84b08d", "text": "The classification of a synthetic aperture radar (SAR) image is a significant yet challenging task, due to the presence of speckle noises and the absence of effective feature representation. Inspired by deep learning technology, a novel deep supervised and contractive neural network (DSCNN) for SAR image classification is proposed to overcome these problems. In order to extract spatial features, a multiscale patch-based feature extraction model that consists of gray level-gradient co-occurrence matrix, Gabor, and histogram of oriented gradient descriptors is developed to obtain primitive features from the SAR image. Then, to get discriminative representation of initial features, the DSCNN network that comprises four layers of supervised and contractive autoencoders is proposed to optimize features for classification. The supervised penalty of the DSCNN can capture the relevant information between features and labels, and the contractive restriction aims to enhance the locally invariant and robustness of the encoding representation. Consequently, the DSCNN is able to produce effective representation of sample features and provide superb predictions of the class labels. Moreover, to restrain the influence of speckle noises, a graph-cut-based spatial regularization is adopted after classification to suppress misclassified pixels and smooth the results. Experiments on three SAR data sets demonstrate that the proposed method is able to yield superior classification performance compared with some related approaches.", "title": "" }, { "docid": "d7fd9c273c0b26a309b84e0d99143557", "text": "Remote sensing is one of the most common ways to extract relevant information about Earth and our environment. Remote sensing acquisitions can be done by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. According to the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR), and material content (multispectral and hyperspectral) of the objects in the image. Once considered together their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture), damage detection (e.g., in natural disasters such as floods, hurricanes, earthquakes, oil spills in seas), and give insights to potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allows one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropological effects (urban sprawl, deforestation), climate changes (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the data fusion contests, organized by the IEEE Geoscience and Remote Sensing Society since 2006. We will report on the outcomes of these contests, presenting the multimodal sets of data made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What were the improvements/new opportunities offered by the fusion? 
What were the objectives to be addressed and the reported solutions? And from this, what will be the next challenges?", "title": "" }, { "docid": "110f9936d045b4112954783dfb9c22fb", "text": "When exploited in remote sensing analysis, a reliable change rule with transfer ability can detect changes accurately and be applied widely. However, in practice, the complexity of land cover changes makes it difficult to use only one change rule or change feature learned from a given multi-temporal dataset to detect any other new target images without applying other learning processes. In this study, we consider the design of an efficient change rule having transferability to detect both binary and multi-class changes. The proposed method relies on an improved Long Short-Term Memory (LSTM) model to acquire and record the change information of long-term sequence remote sensing data. In particular, a core memory cell is utilized to learn the change rule from the information concerning binary changes or multi-class changes. Three gates are utilized to control the input, output and update of the LSTM model for optimization. In addition, the learned rule can be applied to detect changes and transfer the change rule from one learned image to another new target multi-temporal image. In this study, binary experiments, transfer experiments and multi-class change experiments are exploited to demonstrate the superiority of our method. Three contributions of this work can be summarized as follows: (1) the proposed method can learn an effective change rule to provide reliable change information for multi-temporal images; (2) the learned change rule has good transferability for detecting changes in new target images without any extra learning process, and the new target images should have a multi-spectral distribution similar to that of the training images; and (3) to the authors’ best knowledge, this is the first time that deep learning in recurrent neural networks is exploited for change detection. In addition, under the framework of the proposed method, changes can be detected under both binary detection and multi-class change detection.", "title": "" } ]
[ { "docid": "59776cc8a1ab1d1ac86034c98760b7cf", "text": "The problems encountered by students in first year computer programming units are a common concern in many universities including Victoria University. A fundamental component of a computer science curriculum, computer programming is a mandatory unit in a computing course. It is also one of the most feared and hated units by many novice computing students who, having failed or performed poorly in a programming unit, often drop out from a course. This article discusses some of the difficulties experienced by first year programming students, and reviews some of the initiatives undertaken to counter the problems. The article also reports on the first stage of a current research project at Victoria University that aims to develop a balanced approach to teaching first year programming units; its goal is to ‘befriend’ computer programming to help promote success among new programming students.", "title": "" }, { "docid": "a2a09c544172a3212ccc6d7a7ea7ac43", "text": "Extending semantic parsing systems to new domains and languages is a highly expensive, time-consuming process, so making effective use of existing resources is critical. In this paper, we describe a transfer learning method using crosslingual word embeddings in a sequence-tosequence model. On the NLmaps corpus, our approach achieves state-of-the-art accuracy of 85.7% for English. Most importantly, we observed a consistent improvement for German compared with several baseline domain adaptation techniques. As a by-product of this approach, our models that are trained on a combination of English and German utterances perform reasonably well on codeswitching utterances which contain a mixture of English and German, even though the training data does not contain any code-switching. As far as we know, this is the first study of code-switching in semantic parsing. We manually constructed the set of code-switching test utterances for the NLmaps corpus and achieve 78.3% accuracy on this dataset.", "title": "" }, { "docid": "383b029f9c10186a163f48c01e1ef857", "text": "Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. 
Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.", "title": "" }, { "docid": "5ce8a143ccb977917df41b93de16aa40", "text": "The graduated optimization approach, also known as the continuation method, is a popular heuristic to solving non-convex problems that has received renewed interest over the last decade. Despite being popular, very little is known in terms of its theoretical convergence analysis. In this paper we describe a new first-order algorithm based on graduated optimization and analyze its performance. We characterize a family of non-convex functions for which this algorithm provably converges to a global optimum. In particular, we prove that the algorithm converges to an ε-approximate solution within O(1/ε) gradient-based steps. We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate. Additionally, we discuss the setting of “zeroorder optimization”, and devise a variant of our algorithm which converges at rate of O(d/ε).", "title": "" }, { "docid": "6a5b587073c46cc584fc01c4f3519fab", "text": "Baggage inspection using X-ray screening is a priority task that reduces the risk of crime and terrorist attacks. Manual detection of threat items is tedious because very few bags actually contain threat items and the process requires a high degree of concentration. An automated solution would be a welcome development in this field. We propose a methodology for automatic detection of threat objects using single X-ray images. Our approach is an adaptation of a methodology originally created for recognizing objects in photographs based on implicit shape models. Our detection method uses a visual vocabulary and an occurrence structure generated from a training dataset that contains representative X-ray images of the threat object to be detected. Our method can be applied to single views of grayscale X-ray images obtained using a single energy acquisition system. We tested the effectiveness of our method for the detection of three different threat objects: 1) razor blades; 2) shuriken (ninja stars); and 3) handguns. The testing dataset for each threat object consisted of 200 X-ray images of bags. The true positive and false positive rates (TPR and FPR) are: (0.99 and 0.02) for razor blades, (0.97 and 0.06) for shuriken, and (0.89 and 0.18) for handguns. If other representative training datasets were utilized, we believe that our methodology could aid in the detection of other kinds of threat objects.", "title": "" }, { "docid": "1b5b6c4a82436b6dcbf984a199c68b5d", "text": "Online fashion sales present a challenging use case for personalized recommendation: Stores offer a huge variety of items in multiple sizes. Small stocks, high return rates, seasonality, and changing trends cause continuous turnover of articles for sale on all time scales. Customers tend to shop rarely, but often buy multiple items at once. We report on backtest experiments with sales data of 100k frequent shoppers at Zalando, Europe’s leading online fashion platform. 
To model changing customer and store environments, our recommendation method employs a pair of neural networks: To overcome the cold start problem, a feedforward network generates article embeddings in “fashion space,” which serve as input to a recurrent neural network that predicts a style vector in this space for each client, based on their past purchase sequence. We compare our results with a static collaborative filtering approach, and a popularity ranking baseline.", "title": "" }, { "docid": "aaf075f849b4e61f57aa2451cdccad70", "text": "The spatial relation between mitochondria and endoplasmic reticulum (ER) in living HeLa cells was analyzed at high resolution in three dimensions with two differently colored, specifically targeted green fluorescent proteins. Numerous close contacts were observed between these organelles, and mitochondria in situ formed a largely interconnected, dynamic network. A Ca2+-sensitive photoprotein targeted to the outer face of the inner mitochondrial membrane showed that, upon opening of the inositol 1,4,5-triphosphate (IP3)-gated channels of the ER, the mitochondrial surface was exposed to a higher concentration of Ca2+ than was the bulk cytosol. These results emphasize the importance of cell architecture and the distribution of organelles in regulation of Ca2+ signaling.", "title": "" }, { "docid": "e2132912c7e715f464f3d7f2599c2644", "text": "Data mining technology is applied to fraud detection to establish the fraud detection model, describe the process of creating the fraud detection model, then establish data model with ID3 decision tree, and establish example of fraud detection model by using this model. As e-commerce sales continue to grow, the associated online fraud remains an attractive source of revenue for fraudsters. These fraudulent activities impose a considerable financial loss to merchants, making online fraud detection a necessity. The problem of fraud detection is concerned with not only capturing the fraudulent activities, but also capturing them as quickly as possible. This timeliness is crucial to decrease financial losses.", "title": "" }, { "docid": "50d2ab388c7bf28d4849bb51295cad36", "text": "In the near future, container ports will no longer be able to expand into the surrounding land and will thus be unable to meet the storage requirements due to the boom in world trade. A solution to this problem is to increase the container throughput of the port by reducing the amount of time necessary to load and unload a ship. This paper presents distributed agent architecture to achieve this task. Under such architecture, an intelligent planning algorithm is continuously optimised by the dynamic and co-operative rescheduling of yard resources such as quay cranes and", "title": "" }, { "docid": "87f7c3cfe6ca262e1f8716bf8ee16d2b", "text": "Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. 
These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.", "title": "" }, { "docid": "b68680f47f1d9b45e30262ab45f0027b", "text": "Brain-computer interface (BCI) systems create a novel communication channel from the brain to an output device by bypassing conventional motor output pathways of nerves and muscles. Therefore they could provide a new communication and control option for paralyzed patients. Modern BCI technology is essentially based on techniques for the classification of single-trial brain signals. Here we present a novel technique that allows the simultaneous optimization of a spatial and a spectral filter enhancing discriminability rates of multichannel EEG single-trials. The evaluation of 60 experiments involving 22 different subjects demonstrates the significant superiority of the proposed algorithm over to its classical counterpart: the median classification error rate was decreased by 11%. Apart from the enhanced classification, the spatial and/or the spectral filter that are determined by the algorithm can also be used for further analysis of the data, e.g., for source localization of the respective brain rhythms", "title": "" }, { "docid": "f5e6df40898a5b84f8e39784f9b56788", "text": "OBJECTIVE\nTo determine the prevalence of anxiety and depression among medical students at Nishtar Medical College, Multan.\n\n\nMETHODS\nA cross-sectional study was carried out at Nishtar Medical College, Multan in 2008. The questionnaire was administered to 815 medical students who had spent more than 6 months in college and had no self reported physical illness. They were present at the time of distribution of the questionnaires and consented. Prevalence of anxiety and depression was assessed using a structured validated questionnaire, the Aga Khan University Anxiety and Depression Scale with a cut-off score of 19. Data Analysis was done using SPSS v. 14.\n\n\nRESULTS\nOut of 815 students, 482 completed the questionnaire with a response rate of 59.14%. The mean age of students was 20.66 +/- 1.8 years. A high prevalence of anxiety and depression (43.89%) was found amongst medical students. Prevalence of anxiety and depression among students of first, second, third, fourth and final years was 45.86%, 52.58%, 47.14%, 28.75% and 45.10% respectively. Female students were found to be more depressed than male students (OR = 2.05, 95% CI = 1.42-2.95, p = 0.0001). There was a significant association between the prevalence of anxiety and depression and the respective year of medical college (p = 0.0276). It was seen that age, marital status, locality and total family income did not significantly affect the prevalence of anxiety and depression.\n\n\nCONCLUSIONS\nThe results showed that medical students constitute a vulnerable group that has a high prevalence of psychiatric morbidity comprising of anxiety and depression.", "title": "" }, { "docid": "ca39fa1610ae59ba7b0d6d3acabd2149", "text": "Although procedural modeling of cities has attracted a lot of attention for the past decade, populating arbitrary landscapes with non-urban settlements remains an open problem. In this work, we focus on the modeling of small, European villages that took benefit of terrain features to settle in safe, sunny or simply convenient places. We introduce a three step procedural generation method. 
First, an iterative process based on interest maps is used to progressively generate settlement seeds and the roads that connect them. The fact that a new road attracts settlers while a new house often leads to some extension of the road network is taken into account. Then, an anisotropic conquest method is introduced to segment the land into parcels around settlement seeds. Finally, we introduce open shape grammar to generate 3D geometry that adapts to the local slope. We demonstrate the effectiveness of our method by generating different kinds of village on arbitrary terrains, from a mountain hamlet to a fisherman village, and validate through comparison with real data.", "title": "" }, { "docid": "feb9d8849bfc663da750718870d1bb93", "text": "Private data in healthcare system require confidentiality protection while transmitting. Steganography is the art of concealing data into a cover media for conveying messages confidentially. In this paper, we propose a steganographic method which can provide private data in medical system with very secure protection. In our method, a cover image is first mapped into a 1D pixels sequence by Hilbert filling curve and then divided into non-overlapping embedding units with three consecutive pixels. We use adaptive pixel pair match (APPM) method to embed digits in the pixel value differences (PVD) of the three pixels and the base of embedded digits is dependent on the differences among the three pixels. By solving an optimization problem, minimal distortion of the pixel ternaries caused by data embedding can be obtained. The experimental results show our method is more suitable to privacy protection of healthcare system than prior steganographic works.", "title": "" }, { "docid": "b34db00c8a84eab1c7b1a6458fc6cd97", "text": "The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient “purposive” approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of humancomputer interaction. 
Index Terms —Vision-based gesture recognition, gesture analysis, hand tracking, nonrigid motion analysis, human-computer", "title": "" }, { "docid": "ce2e7b853c967a4c0980ae2ded36890a", "text": "In this paper, we use data augmentation to improve performance of deep neural network (DNN) embeddings for speaker recognition. The DNN, which is trained to discriminate between speakers, maps variable-length utterances to fixed-dimensional embeddings that we call x-vectors. Prior studies have found that embeddings leverage large-scale training datasets better than i-vectors. However, it can be challenging to collect substantial quantities of labeled data for training. We use data augmentation, consisting of added noise and reverberation, as an inexpensive method to multiply the amount of training data and improve robustness. The x-vectors are compared with i-vector baselines on Speakers in the Wild and NIST SRE 2016 Cantonese. We find that while augmentation is beneficial in the PLDA classifier, it is not helpful in the i-vector extractor. However, the x-vector DNN effectively exploits data augmentation, due to its supervised training. As a result, the x-vectors achieve superior performance on the evaluation datasets.", "title": "" }, { "docid": "645f49ff21d31bb99cce9f05449df0d7", "text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.", "title": "" }, { "docid": "dcab5c32a037ac31f8a541458a2d72a6", "text": "To determine the 3D orientation and 3D location of objects in the surroundings of a camera mounted on a robot or mobile device, we developed two powerful algorithms in object detection and temporal tracking that are combined seamlessly for robotic perception and interaction as well as Augmented Reality (AR). A separate evaluation of, respectively, the object detection and the temporal tracker demonstrates the important stride in research as well as the impact on industrial robotic applications and AR. 
When evaluated on a standard dataset, the detector produced the highest f1score with a large margin while the tracker generated the best accuracy at a very low latency of approximately 2 ms per frame with one CPU core – both algorithms outperforming the state of the art. When combined, we achieve a powerful framework that is robust to handle multiple instances of the same object under occlusion and clutter while attaining real-time performance. Aiming at stepping beyond the simple scenarios used by current systems, often constrained by having a single object in absence of clutter, averting to touch the object to prevent close-range partial occlusion, selecting brightly colored objects to easily segment them individually or assuming that the object has simple geometric structure, we demonstrate the capacity to handle challenging cases under clutter, partial occlusion and varying lighting conditions with objects of different shapes and sizes.", "title": "" }, { "docid": "f91e1638e4812726ccf96f410da2624b", "text": "We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent’s policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and Dueling agents (entropy reward and -greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.", "title": "" }, { "docid": "3841968dc54370cab167837bf70f3eef", "text": "Task scheduling plays a key role in cloud computing systems. Scheduling of tasks cannot be done on the basis of single criteria but under a lot of rules and regulations that we can term as an agreement between users and providers of cloud. This agreement is nothing but the quality of service that the user wants from the providers. Providing good quality of services to the users according to the agreement is a decisive task for the providers as at the same time there are a large number of tasks running at the provider’s side. The task scheduling problem can be viewed as the finding or searching an optimal mapping/assignment of set of subtasks of different tasks over the available set of resources (processors/computer machines) so that we can achieve the desired goals for tasks. In this paper we are performing comparative study of the different algorithms for their suitability, feasibility, adaptability in the context of cloud scenario, after that we try to propose the hybrid approach that can be adopted to enhance the existing platform further. So that it can facilitate cloud-providers to provide better quality of services. Keywords— Cloud Computing, Cloud Architecture, Task Scheduling, Scheduling Types, GA, PSO", "title": "" } ]
scidocsrr
98514141a65b935341cc855823869b5d
Attachment security in couple relationships: a systemic model and its implications for family dynamics.
[ { "docid": "c1b955d77936e641f2ac05cb57fa91ed", "text": "A theoretical model describing interpersonal trust in close relationships is presented. Three dimensions of trust are identified, based on the type of attributions drawn about a partner's motives. These dimensions are also characterized by a developmental progression in the relationship. The validity of this theoretical perspective was examined through evidence obtained from a survey of a heterogeneous sample of established couples. An analysis of the Trust Scale in this sample was consistent with the notion that the predictability, dependability, and faith components represent distinct and coherent dimensions. A scale to measure interpersonal motives was also developed. The perception of intrinsic motives in a partner emerged as a dimension, as did instrumental and extrinsic motives. As expected, love and happiness were closely tied to feelings of faith and the attribution of intrinsic motivation to both self and partner. Women appeared to have more integrated, complex views of their relationships than men: All three forms of trust were strongly related and attributions of instrumental motives in their partners seemed to be self-affirming. Finally, there was a tendency for people to view their own motives as less self-centered and more exclusively intrinsic in flavor than their partner's motives.", "title": "" }, { "docid": "c90ab409ea2a9726f6ddded45e0fdea9", "text": "About a decade ago, the Adult Attachment Interview (AAI; C. George, N. Kaplan, & M. Main, 1985) was developed to explore parents' mental representations of attachment as manifested in language during discourse of childhood experiences. The AAI was intended to predict the quality of the infant-parent attachment relationship, as observed in the Ainsworth Strange Situation, and to predict parents' responsiveness to their infants' attachment signals. The current meta-analysis examined the available evidence with respect to these predictive validity issues. In regard to the 1st issue, the 18 available samples (N = 854) showed a combined effect size of 1.06 in the expected direction for the secure vs. insecure split. For a portion of the studies, the percentage of correspondence between parents' mental representation of attachment and infants' attachment security could be computed (the resulting percentage was 75%; kappa = .49, n = 661). Concerning the 2nd issue, the 10 samples (N = 389) that were retrieved showed a combined effect size of .72 in the expected direction. According to conventional criteria, the effect sizes are large. It was concluded that although the predictive validity of the AAI is a replicated fact, there is only partial knowledge of how attachment representations are transmitted (the transmission gap).", "title": "" } ]
[ { "docid": "5d1f3dbce3f5d33b4d0b251da060cab6", "text": "Cyber-Physical Systems (CPS) is an exciting emerging research area that has drawn the attention of many researchers. Although the question of \"What is a CPS?\" remains open, widely recognized and accepted attributes of a CPS include timeliness, distributed, reliability, fault-tolerance, security, scalability and autonomous. In this paper, a CPS definition is given and a prototype architecture is proposed. It is argued that this architecture captures the essential attributes of a CPS and lead to identification of many research challenges.", "title": "" }, { "docid": "dc83a0826e509d9d4be6b4b58550b20e", "text": "This review describes historical iodine deficiency in the U.K., gives current information on dietary sources of iodine and summarises recent evidence of iodine deficiency and its association with child neurodevelopment. Iodine is required for the production of thyroid hormones that are needed for brain development, particularly during pregnancy. Iodine deficiency is a leading cause of preventable brain damage worldwide and is associated with impaired cognitive function. Despite a global focus on the elimination of iodine deficiency, iodine is a largely overlooked nutrient in the U.K., a situation we have endeavoured to address through a series of studies. Although the U.K. has been considered iodine-sufficient for many years, there is now concern that iodine deficiency may be prevalent, particularly in pregnant women and women of childbearing age; indeed we found mild-to-moderate iodine deficiency in pregnant women in Surrey. As the major dietary source of iodine in the U.K. is milk and dairy produce, it is relevant to note that we have found the iodine concentration of organic milk to be over 40% lower than that of conventional milk. In contrast to many countries, iodised table salt is unlikely to contribute to U.K. iodine intake as we have shown that its availability is low in grocery stores. This situation is of concern as the level of U.K. iodine deficiency is such that it is associated with adverse effects on offspring neurological development; we demonstrated a higher risk of low IQ and poorer reading-accuracy scores in U.K. children born to mothers who were iodine-deficient during pregnancy. Given our findings and those of others, iodine status in the U.K. population should be monitored, particularly in vulnerable subgroups such as pregnant women and children.", "title": "" }, { "docid": "b3d915b4ff4d86b8c987b760fcf7d525", "text": "We examine how exercising control over a technology platform can increase profits and innovation. Benefits depend on using a platform as a governance mechanism to influence ecosystem parters. Results can inform innovation strategy, antitrust and intellectual property law, and management of competition.", "title": "" }, { "docid": "25a7f23c146add12bfab3f1fc497a065", "text": "One of the greatest puzzles of human evolutionary history concerns the how and why of the transition from small-scale, ‘simple’ societies to large-scale, hierarchically complex ones. This paper reviews theoretical approaches to resolving this puzzle. Our discussion integrates ideas and concepts from evolutionary biology, anthropology, and political science. The evolutionary framework of multilevel selection suggests that complex hierarchies can arise in response to selection imposed by intergroup conflict (warfare). 
The logical coherency of this theory has been investigated with mathematical models, and its predictions were tested empirically by constructing a database of the largest territorial states in the world (with the focus on the preindustrial era).", "title": "" }, { "docid": "6b7f2b7e528ee530822ff5bbb371645d", "text": "Automatically generating video captions with natural language remains a challenge for both the field of nature language processing and computer vision. Recurrent Neural Networks (RNNs), which models sequence dynamics, has proved to be effective in visual interpretation. Based on a recent sequence to sequence model for video captioning, which is designed to learn the temporal structure of the sequence of frames and the sequence model of the generated sentences with RNNs, we investigate how pretrained language model and attentional mechanism can aid the generation of natural language descriptions of videos. We evaluate our improvements on the Microsoft Video Description Corpus (MSVD) dataset, which is a standard dataset for this task. The results demonstrate that our approach outperforms original sequence to sequence model and achieves state-of-art baselines. We further run our model one a much harder Montreal Video Annotation Dataset (M-VAD), where the model also shows promising results.", "title": "" }, { "docid": "97ef62d13180ee6bb44ec28ff3b3d53e", "text": "Glioblastoma tumour cells release microvesicles (exosomes) containing mRNA, miRNA and angiogenic proteins. These microvesicles are taken up by normal host cells, such as brain microvascular endothelial cells. By incorporating an mRNA for a reporter protein into these microvesicles, we demonstrate that messages delivered by microvesicles are translated by recipient cells. These microvesicles are also enriched in angiogenic proteins and stimulate tubule formation by endothelial cells. Tumour-derived microvesicles therefore serve as a means of delivering genetic information and proteins to recipient cells in the tumour environment. Glioblastoma microvesicles also stimulated proliferation of a human glioma cell line, indicating a self-promoting aspect. Messenger RNA mutant/variants and miRNAs characteristic of gliomas could be detected in serum microvesicles of glioblastoma patients. The tumour-specific EGFRvIII was detected in serum microvesicles from 7 out of 25 glioblastoma patients. Thus, tumour-derived microvesicles may provide diagnostic information and aid in therapeutic decisions for cancer patients through a blood test.", "title": "" }, { "docid": "78283b148e6340ef9c49e503f9f39a2e", "text": "Blur in facial images significantly impedes the efficiency of recognition approaches. However, most existing blind deconvolution methods cannot generate satisfactory results due to their dependence on strong edges, which are sufficient in natural images but not in facial images. In this paper, we represent point spread functions (PSFs) by the linear combination of a set of pre-defined orthogonal PSFs, and similarly, an estimated intrinsic (EI) sharp face image is represented by the linear combination of a set of pre-defined orthogonal face images. In doing so, PSF and EI estimation is simplified to discovering two sets of linear combination coefficients, which are simultaneously found by our proposed coupled learning algorithm. 
To make our method robust to different types of blurry face images, we generate several candidate PSFs and EIs for a test image, and then, a non-blind deconvolution method is adopted to generate more EIs by those candidate PSFs. Finally, we deploy a blind image quality assessment metric to automatically select the optimal EI. Thorough experiments on the facial recognition technology database, extended Yale face database B, CMU pose, illumination, and expression (PIE) database, and face recognition grand challenge database version 2.0 demonstrate that the proposed approach effectively restores intrinsic sharp face images and, consequently, improves the performance of face recognition.", "title": "" }, { "docid": "ba4260598a634bcfdfb7423182c4c8b6", "text": "A wide range of computational methods and tools for data analysis are available. In this study we took advantage of those available technological advancements to develop prediction models for the prediction of a Type-2 Diabetic Patient. We aim to investigate how the diabetes incidents are affected by patients’ characteristics and measurements. Efficient predictive modeling is required for medical researchers and practitioners. This study proposes Hybrid Prediction Model (HPM) which uses Simple K-means clustering algorithm aimed at validating chosen class label of given data (incorrectly classified instances are removed, i.e. pattern extracted from original data) and subsequently applying the classification algorithm to the result set. C4.5 algorithm is used to build the final classifier model by using the k-fold cross-validation method. The Pima Indians diabetes data was obtained from the University of California at Irvine (UCI) machine learning repository datasets. A wide range of different classification methods have been applied previously by various researchers in order to find the best performing algorithm on this dataset. The accuracies achieved have been in the range of 59.4–84.05%. However the proposed HPM obtained a classification accuracy of 92.38%. In order to evaluate the performance of the proposed method, sensitivity and specificity performance measures that are used commonly in medical classification studies were used. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "21e35c773ac9b9300f6df44854fcd141", "text": "Time is a fundamental domain of experience. In this paper we ask whether aspects of language and culture affect how people think about this domain. Specifically, we consider whether English and Mandarin speakers think about time differently. We review all of the available evidence both for and against this hypothesis, and report new data that further support and refine it. The results demonstrate that English and Mandarin speakers do think about time differently. As predicted by patterns in language, Mandarin speakers are more likely than English speakers to think about time vertically (with earlier time-points above and later time-points below).", "title": "" }, { "docid": "06c8d56ecc9e92b106de01ad22c5a125", "text": "Due to the reasonably acceptable performance of state-of-the-art object detectors, tracking-by-detection is a standard strategy for visual multi-object tracking (MOT). In particular, online MOT is more demanding due to its diverse applications in time-critical situations. A main issue of realizing online MOT is how to associate noisy object detection results on a new frame with previously being tracked objects. 
In this work, we propose a multi-object tracker method called CRF-boosting which utilizes a hybrid data association method based on online hybrid boosting facilitated by a conditional random field (CRF) for establishing online MOT. For data association, learned CRF is used to generate reliable low-level tracklets and then these are used as the input of the hybrid boosting. To do so, while existing data association methods based on boosting algorithms have the necessity of training data having ground truth information to improve robustness, CRF-boosting ensures sufficient robustness without such information due to the synergetic cascaded learning procedure. Further, a hierarchical feature association framework is adopted to further improve MOT accuracy. From experimental results on public datasets, we could conclude that the benefit of proposed hybrid approach compared to the other competitive MOT systems is noticeable.", "title": "" }, { "docid": "5b07f0dbf40fb302d04cb7a880d9f67f", "text": "The current study investigated whether long-term experience in music or a second language is associated with enhanced cognitive functioning. Early studies suggested the possibility of a cognitive advantage from musical training and bilingualism but have failed to be replicated by recent findings. Further, each form of expertise has been independently investigated leaving it unclear whether any benefits are specifically caused by each skill or are a result of skill learning in general. To assess whether cognitive benefits from training exist, and how unique they are to each training domain, the current study compared musicians and bilinguals to each other, plus to individuals who had expertise in both skills, or neither. Young adults (n = 153) were categorized into one of four groups: monolingual musician; bilingual musician; bilingual non-musician; and monolingual non-musician. Multiple tasks per cognitive ability were used to examine the coherency of any training effects. Results revealed that musically trained individuals, but not bilinguals, had enhanced working memory. Neither skill had enhanced inhibitory control. The findings confirm previous associations between musicians and improved cognition and extend existing evidence to show that benefits are narrower than expected but can be uniquely attributed to music compared to another specialized auditory skill domain. The null bilingual effect despite a music effect in the same group of individuals challenges the proposition that young adults are at a performance ceiling and adds to increasing evidence on the lack of a bilingual advantage on cognition.", "title": "" }, { "docid": "348c62670a729da42654f0cf685bba53", "text": "The networks of intelligent building are usually consist of a great number of smart devices. Since many smart devices only support on-site configuration and upgrade, and communication between devices could be observed and even altered by attackers, efficiency and security are two key concerns in maintaining and managing the devices used in intelligent building networks. In this paper, the authors apply the technology of software defined networking to satisfy the requirement for efficiency in intelligent building networks. More specific, a protocol stack in smart devices that support OpenFlow is designed. In addition, the authors designed the lightweight security mechanism with two foundation protocols and a full protocol that uses the foundation protocols as example. 
Performance and session key establishment for the security mechanism are also discussed.", "title": "" }, { "docid": "0f0828cb4b218345fad2304bb3743851", "text": "A fully dynamic SAR ADC is proposed that uses passive charge-sharing and an asynchronous controller to achieve low power consumption. No active circuits are needed for high-speed operation and all static power is removed, offering power consumption proportional to sampling frequency from 50MS/s down to 0. The prototype implementation in 90nm digital CMOS achieves 7.8 ENOB, 49dB SNDR at 20MS/s consuming 290 muW. This results in a FOM of 65fJ/conversion-step.", "title": "" }, { "docid": "4afa269cb8ff0fb4b90f3fe5ddcd0675", "text": "Sleep specialists often conduct manual sleep stage scoring by visually inspecting the patient’s neurophysiological signals collected at sleep labs. This is, generally, a very difficult, tedious and time-consuming task. The limitations of manual sleep stage scoring have escalated the demand for developing Automatic Sleep Stage Classification (ASSC) systems. Sleep stage classification refers to identifying the various stages of sleep and is a critical step in an effort to assist physicians in the diagnosis and treatment of related sleep disorders. The aim of this paper is to survey the progress and challenges in various existing Electroencephalogram (EEG) signal-based methods used for sleep stage identification at each phase; including pre-processing, feature extraction and classification; in an attempt to find the research gaps and possibly introduce a reasonable solution. Many of the prior and current related studies use multiple EEG channels, and are based on 30 s or 20 s epoch lengths which affect the feasibility and speed of ASSC for real-time applications. Thus, in this paper, we also present a novel and efficient technique that can be implemented in an embedded hardware device to identify sleep stages using new statistical features applied to 10 s epochs of single-channel EEG signals. In this study, the PhysioNet Sleep European Data Format (EDF) Database was used. The proposed methodology achieves an average classification sensitivity, specificity and accuracy of 89.06%, 98.61% and 93.13%, respectively, when the decision tree classifier is applied. Finally, our new method is compared with those in recently published studies, which reiterates the high classification accuracy performance.", "title": "" }, { "docid": "896fe681f79ef025a6058a51dd4f19c0", "text": "Semantic parsing is the construction of a complete, formal, symbolic meaning representation of a sentence. While it is crucial to natural language understanding, the problem of semantic parsing has received relatively little attention from the machine learning community. Recent work on natural language understanding has mainly focused on shallow semantic analysis, such as word-sense disambiguation and semantic role labeling. Semantic parsing, on the other hand, involves deep semantic analysis in which word senses, semantic roles and other components are combined to produce useful meaning representations for a particular application domain (e.g. database query). Prior research in machine learning for semantic parsing is mainly based on inductive logic programming or deterministic parsing, which lack some of the robustness that characterizes statistical learning. 
Existing statistical approaches to semantic parsing, however, are mostly concerned with relatively simple application domains in which a meaning representation is no more than a single semantic frame. In this proposal, we present a novel statistical approach to semantic parsing, WASP, which can handle meaning representations with a nested structure. The WASP algorithm learns a semantic parser given a set of sentences annotated with their correct meaning representations. The parsing model is based on the synchronous context-free grammar, where each rule maps a natural-language substring to its meaning representation. The main innovation of the algorithm is its use of state-of-the-art statistical machine translation techniques. A statistical word alignment model is used for lexical acquisition, and the parsing model itself can be seen as an instance of a syntax-based translation model. In initial evaluation on several real-world data sets, we show that WASP performs favorably in terms of both accuracy and coverage compared to existing learning methods requiring similar amount of supervision, and shows better robustness to variations in task complexity and word order. In future work, we intend to pursue several directions in developing accurate semantic parsers for a variety of application domains. This will involve exploiting prior knowledge about the natural-language syntax and the application domain. We also plan to construct a syntax-aware word-based alignment model for lexical acquisition. Finally, we will generalize the learning algorithm to handle contextdependent sentences and accept noisy training data.", "title": "" }, { "docid": "914a780f253dd4ec619fac848e88b4ee", "text": "In the first part of the paper, we modeled and characterized the underwater radio channel in shallowwaters. In the second part,we analyze the application requirements for an underwaterwireless sensor network (U-WSN) operating in the same environment and perform detailed simulations. We consider two localization applications, namely self-localization and navigation aid, and propose algorithms that work well under the specific constraints associated with U-WSN, namely low connectivity, low data rates and high packet loss probability. We propose an algorithm where the sensor nodes collaboratively estimate their unknown positions in the network using a low number of anchor nodes and distance measurements from the underwater channel. Once the network has been self-located, we consider a node estimating its position for underwater navigation communicating with neighboring nodes. We also propose a communication system and simulate the whole electromagnetic U-WSN in the Castalia simulator to evaluate the network performance, including propagation impairments (e.g., noise, interference), radio parameters (e.g., modulation scheme, bandwidth, transmit power), hardware limitations (e.g., clock drift, transmission buffer) and complete MAC and routing protocols. We also explain the changes that have to be done to Castalia in order to perform the simulations. In addition, we propose a parametric model of the communication channel that matches well with the results from the first part of this paper. Finally, we provide simulation results for some illustrative scenarios.", "title": "" }, { "docid": "fc77cdf4712d15d21a787602fca94470", "text": "In this paper we present a Quantified SWOT (Strengths, Weaknesses, Opportunities and Threats) analytical method which provides more detailed and quantified data for SWOT analysis. 
The Quantified SWOT analytical method adopts the concept of Multiple-Attribute Decision Making (MADM), which uses a multi-layer scheme to simplify complicated problems, and thus is able to perform SWOT analysis on several enterprises simultaneously. Container ports in East Asia are taken as a case study in this paper. Quantified SWOT analysis is used to assess the competing strength of each port and then suggest an adoptable competing strategy for each. c © 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "afa4acca5438cc54dc4e584f79ec9b97", "text": "We study dynamic pricing policies for a monopolist selling perishable products over a finite time horizon to strategic buyers. Buyers are strategic in the sense that they anticipate the firm’s price policies. It is expensive and administratively difficult for most brick and mortar retailers to change prices, placing limits on the number of price changes and the types of pricing policies they can adopt. The simplest policy is to commit to a set of price changes. A more complex alternative is to let the price depend on sales history. We investigate two pricing schemes that we call posted and contingent pricing. Using the posted pricing scheme, the firm announces a set of prices at the beginning of the horizon. In the contingent pricing scheme, price evolution depends upon demand realization. Our focus is on the posted pricing scheme because of its ease of implementation. Counter to intuition, we find that neither a posted pricing scheme nor a contingent pricing scheme is dominant and the difference in expected revenues of these two schemes is small. Limiting the number of price changes will result in a decrease in expected revenues. We show that a multi-unit auction with a reservation price provides an upper bound for expected revenues for both pricing schemes. Numerical examples suggest that a posted pricing scheme with two or three price changes is enough to achieve revenues that are close to the upper bound. Dynamic pricing is only useful when strategic buyers perceive scarcity. We study the impact of scarcity and derive the optimal stocking levels for large markets. Finally, we investigate whether or not it is optimal for the seller to conceal inventory or sales information from buyers. A firm benefits if it does not reveal the number of units it has available for sale at the beginning of the season, or subsequently withholds information about the number of units sold. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "de761c4e3e79b5b4d056552e0a71a7b6", "text": "A novel multiple-input multiple-output (MIMO) dielectric resonator antenna (DRA) for long term evolution (LTE) femtocell base stations is described. The proposed antenna is able to transmit and receive information independently using TE and HE modes in the LTE bands 12 (698-716 MHz, 728-746 MHz) and 17 (704-716 MHz, 734-746 MHz). A systematic design method based on perturbation theory is proposed to induce mode degeneration for MIMO operation. Through perturbing the boundary of the DRA, the amount of energy stored by a specific mode is changed as well as the resonant frequency of that mode. Hence, by introducing an adequate boundary perturbation, the TE and HE modes of the DRA will resonate at the same frequency and share a common impedance bandwidth. The simulated mutual coupling between the modes was as low as - 40 dB . 
It was estimated that in a rich scattering environment with an Signal-to-Noise Ratio (SNR) of 20 dB per receiver branch, the proposed MIMO DRA was able to achieve a channel capacity of 11.1 b/s/Hz (as compared to theoretical maximum 2 × 2 capacity of 13.4 b/s/Hz). Our experimental measurements successfully demonstrated the design methodology proposed in this work.", "title": "" } ]
scidocsrr
6a80ac077fdd5a02af9567a309146f62
Botcoin: Monetizing Stolen Cycles
[ { "docid": "bc8b40babfc2f16144cdb75b749e3a90", "text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.", "title": "" } ]
[ { "docid": "24167db00908c65558e8034d94dfb8da", "text": "Due to the wide variety of devices used in computer network systems, cybersecurity plays a major role in securing and improving the performance of the network or system. Although cybersecurity has received a large amount of global interest in recent years, it remains an open research space. Current security solutions in network-based cyberspace provide an open door to attackers by communicating first before authentication, thereby leaving a black hole for an attacker to enter the system before authentication. This article provides an overview of cyberthreats, traditional security solutions, and the advanced security model to overcome current security drawbacks.", "title": "" }, { "docid": "761be34401cc6ef1d8eea56465effca9", "text": "Résumé: Dans cet article, nous proposons une nouvelle approche pour le résumé automatique de textes utilisant un algorithme d'apprentissage numérique spécifique à la tâche d'ordonnancement. L'objectif est d'extraire les phrases d'un document qui sont les plus représentatives de son contenu. Pour se faire, chaque phrase d'un document est représentée par un vecteur de scores de pertinence, où chaque score est un score de similarité entre une requête particulière et la phrase considérée. L'algorithme d'ordonnancement effectue alors une combinaison linéaire de ces scores, avec pour but d'affecter aux phrases pertinentes d'un document des scores supérieurs à ceux des phrases non pertinentes du même document. Les algorithmes d'ordonnancement ont montré leur efficacité en particulier dans le domaine de la méta-recherche, et leur utilisation pour le résumé est motivée par une analogie peut être faite entre la méta-recherche et le résumé automatique qui consiste, dans notre cas, à considérer les similarités des phrases avec les différentes requêtes comme étant des sorties de différents moteurs de recherche. Nous montrons empiriquement que l'algorithme d'ordonnancement a de meilleures performances qu'une approche utilisant un algorithme de classification sur deux corpus distincts.", "title": "" }, { "docid": "0cb237a05e30a4bc419dc374f3a7b55a", "text": "Question-and-answer (Q&A) websites, such as Yahoo! Answers, Stack Overflow and Quora, have become a popular and powerful platform for Web users to share knowledge on a wide range of subjects. This has led to a rapidly growing volume of information and the consequent challenge of readily identifying high quality objects (questions, answers and users) in Q&A sites. Exploring the interdependent relationships among different types of objects can help find high quality objects in Q&A sites more accurately. In this paper, we specifically focus on the ranking problem of co-ranking questions, answers and users in a Q&A website. By studying the tightly connected relationships between Q&A objects, we can gain useful insights toward solving the co-ranking problem. However, co-ranking multiple objects in Q&A sites is a challenging task: a) With the large volumes of data in Q&A sites, it is important to design a model that can scale well; b) The large-scale Q&A data makes extracting supervised information very expensive. In order to address these issues, we propose an unsupervised Network-based Co-Ranking framework (NCR) to rank multiple objects in Q&A sites. Empirical studies on real-world Yahoo! 
Answers datasets demonstrate the effectiveness and the efficiency of the proposed NCR method.", "title": "" }, { "docid": "e0b8b4c2431b92ff878df197addb4f98", "text": "Malware classification is a critical part of the cybersecurity. Traditional methodologies for the malware classification typically use static analysis and dynamic analysis to identify malware. In this paper, a malware classification methodology based on its binary image and extracting local binary pattern (LBP) features is proposed. First, malware images are reorganized into 3 by 3 grids which are mainly used to extract LBP feature. Second, the LBP is implemented on the malware images to extract features in that it is useful in pattern or texture classification. Finally, Tensorflow, a library for machine learning, is applied to classify malware images with the LBP feature. Performance comparison results among different classifiers with different image descriptors such as GIST, a spatial envelope, and the LBP demonstrate that our proposed approach outperforms others.", "title": "" }, { "docid": "dd1fd4f509e385ea8086a45a4379a8b5", "text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.", "title": "" }, { "docid": "173d791e05859ec4cc28b9649c414c62", "text": "Breast cancer is the most common invasive cancer in females worldwide. It usually presents with a lump in the breast with or without other manifestations. Diagnosis of breast cancer depends on physical examination, mammographic findings and biopsy results. Treatment of breast cancer depends on the stage of the disease. Lines of treatment include mainly surgical removal of the tumor followed by radiotherapy or chemotherapy. Other lines including immunotherapy, thermochemotherapy and alternative medicine may represent a hope for breast cancer", "title": "" }, { "docid": "438a9e517a98c6f98f7c86209e601f1b", "text": "One of the most challenging tasks in large-scale multi-label image retrieval is to map images into binary codes while preserving multilevel semantic similarity. Recently, several deep supervised hashing methods have been proposed to learn hash functions that preserve multilevel semantic similarity with deep convolutional neural networks. 
However, these triplet label based methods try to preserve the ranking order of images according to their similarity degrees to the queries while not putting direct constraints on the distance between the codes of very similar images. Besides, the current evaluation criteria are not able to measure the performance of existing hashing methods on preserving fine-grained multilevel semantic similarity. To tackle these issues, we propose a novel Deep Multilevel Semantic Similarity Preserving Hashing (DMSSPH) method to learn compact similarity-preserving binary codes for the huge body of multi-label image data with deep convolutional neural networks. In our approach, we make the best of the supervised information in the form of pairwise labels to maximize the discriminability of output binary codes. Extensive evaluations conducted on several benchmark datasets demonstrate that the proposed method significantly outperforms the state-of-the-art supervised and unsupervised hashing methods at the accuracies of top returned images, especially for shorter binary codes. Meanwhile, the proposed method shows better performance on preserving fine-grained multilevel semantic similarity according to the results under the Jaccard coefficient based evaluation criteria we propose.", "title": "" }, { "docid": "335ac6b7770ec7aaf2ec43ac32c1dc9e", "text": "The biodistribution and pharmacokinetics of (111)In-DTPA-labeled pegylated liposomes (IDLPL) were studied in 17 patients with locally advanced cancers. The patients received 65-107 MBq of IDLPL, and nuclear medicine whole body gamma camera imaging was used to study liposome biodistribution. The t(1/2beta) of IDLPL was 76.1 h. Positive tumor images were obtained in 15 of 17 studies (4 of 5 breast, 5 of 5 head and neck, 3 of 4 bronchus, 2 of 2 glioma, and 1 of 1 cervix cancer). The levels of tumor liposome uptake estimated from regions of interest on gamma camera images were approximately 0.5-3.5% of the injected dose at 72 h. The greatest levels of uptake were seen in the patients with head and neck cancers [33.0 +/- 15.8% ID/kg (percentage of injected dose/kg)]. The uptake in the lung tumors was at an intermediate level (18.3 +/- 5.7% ID/kg), and the breast cancers showed relatively low levels of uptake (5.3 +/- 2.6% ID/kg). These liposome uptake values mirrored the estimated tumor volumes of the various tumor types (36.2 +/- 18.0 cm3 for squamous cell cancer of the head and neck, 114.5 +/- 42.0 cm3 for lung tumors, and 234.7 +/- 101.4 cm3 for breast tumors). In addition, significant localization of the liposomes was seen in the tissues of the reticuloendothelial system (liver, spleen, and bone marrow). One patient with extensive mucocutaneous AIDS-related Kaposi sarcoma was also studied according to a modified protocol, and prominent deposition of the radiolabeled liposomes was demonstrated in these lesions. An additional two patients with resectable head and neck cancer received 26 MBq of IDLPL 48 h before undergoing surgical excision of their tumors. Samples of the tumor, adjacent normal mucosa, muscle, fat, skin, and salivary tissue were obtained at operation. The levels of tumor uptake were 8.8 and 15.9% ID/kg, respectively, with tumor uptake exceeding that in normal mucosa by a mean ratio of 2.3:1, in skin by 3.6:1, in salivary gland by 5.6:1, in muscle by 8.3:1, and in fat by 10.8:1. 
These data strongly support the development of pegylated liposomal agents for the treatment of solid tumors, particularly those of the head and neck.", "title": "" }, { "docid": "77ea0e24066d028d085069cb8f6733e0", "text": "Road scene reconstruction is a fundamental and crucial module at the perception phase for autonomous vehicles, and will influence the later phase, such as object detection, motion planing and path planing. Traditionally, self-driving car uses Lidar, camera or fusion of the two kinds of sensors for sensing the environment. However, single Lidar or camera-based approaches will miss crucial information, and the fusion-based approaches often consume huge computing resources. We firstly propose a conditional Generative Adversarial Networks (cGANs)-based deep learning model that can rebuild rich semantic scene images from upsampled Lidar point clouds only. This makes it possible to remove cameras to reduce resource consumption and improve the processing rate. Simulation on the KITTI dataset also demonstrates that our model can reestablish color imagery from a single Lidar point cloud, and is effective enough for real time sensing on autonomous driving vehicles.", "title": "" }, { "docid": "d786b83c7315b49b6251e27d73983e08", "text": "Memory access efficiency is a key factor in fully utilizing the computational power of graphics processing units (GPUs). However, many details of the GPU memory hierarchy are not released by GPU vendors. In this paper, we propose a novel fine-grained microbenchmarking approach and apply it to three generations of NVIDIA GPUs, namely Fermi, Kepler, and Maxwell, to expose the previously unknown characteristics of their memory hierarchies. Specifically, we investigate the structures of different GPU cache systems, such as the data cache, the texture cache and the translation look-aside buffer (TLB). We also investigate the throughput and access latency of GPU global memory and shared memory. Our microbenchmark results offer a better understanding of the mysterious GPU memory hierarchy, which will facilitate the software optimization and modelling of GPU architectures. To the best of our knowledge, this is the first study to reveal the cache properties of Kepler and Maxwell GPUs, and the superiority of Maxwell in shared memory performance under bank conflict.", "title": "" }, { "docid": "bffd767503e0ab9627fc8637ca3b2efb", "text": "Automatically searching for optimal hyperparameter configurations is of crucial importance for applying deep learning algorithms in practice. Recently, Bayesian optimization has been proposed for optimizing hyperparameters of various machine learning algorithms. Those methods adopt probabilistic surrogate models like Gaussian processes to approximate and minimize the validation error function of hyperparameter values. However, probabilistic surrogates require accurate estimates of sufficient statistics (e.g., covariance) of the error distribution and thus need many function evaluations with a sizeable number of hyperparameters. This makes them inefficient for optimizing hyperparameters of deep learning algorithms, which are highly expensive to evaluate. In this work, we propose a new deterministic and efficient hyperparameter optimization method that employs radial basis functions as error surrogates. The proposed mixed integer algorithm, called HORD, searches the surrogate for the most promising hyperparameter values through dynamic coordinate search and requires many fewer function evaluations. 
HORD does well in low dimensions but it is exceptionally better in higher dimensions. Extensive evaluations on MNIST and CIFAR-10 for four deep neural networks demonstrate HORD significantly outperforms the well-established Bayesian optimization methods such as GP, SMAC and TPE. For instance, on average, HORD is more than 6 times faster than GP-EI in obtaining the best configuration of 19 hyperparameters.", "title": "" }, { "docid": "f27e985a97fe7a61ce14c01aa1fd4a41", "text": "We propose a method for learning dictionaries towards sparse approximation of signals defined on vertices of arbitrary graphs. Dictionaries are expected to describe effectively the main spatial and spectral components of the signals of interest, so that their structure is dependent on the graph information and its spectral representation. We first show how operators can be defined for capturing different spectral components of signals on graphs. We then propose a dictionary learning algorithm built on a sparse approximation step and a dictionary update function, which iteratively leads to adapting the structured dictionary to the class of target signals. Experimental results on synthetic and natural signals on graphs demonstrate the efficiency of the proposed algorithm both in terms of sparse approximation and support recovery performance.", "title": "" }, { "docid": "51e65b3be95c641beb9221fb31687adc", "text": "This paper describes a robust localization system, similar to the one used by the teams participating in the Robocup Small size league (SLL). The system, developed in Object Pascal, allows real time localization and control of an autonomous omnidirectional mobile robot. The localization algorithm is based on odometry and global vision data fusion, applying an extended Kalman filter, this method being a standard approach for reducing the error in a least squares sense, using measurements from different sources.", "title": "" }, { "docid": "f7a2f86526209860d7ea89d3e7f2b576", "text": "Natural Language Processing continues to grow in popularity in a range of research and commercial applications, yet managing the wide array of potential NLP components remains a difficult problem. This paper describes CURATOR, an NLP management framework designed to address some common problems and inefficiencies associated with building NLP process pipelines; and EDISON, an NLP data structure library in Java that provides streamlined interactions with CURATOR and offers a range of useful supporting functionality.", "title": "" }, { "docid": "fcfc8cc9ed49f8fd023957156b86281c", "text": "As consumers spend more time on their mobile devices, a focal retailer’s natural approach is to target potential customers in close proximity to its own location. Yet focal (own) location targeting may cannibalize profits on infra-marginal sales. This study demonstrates the effectiveness of competitive locational targeting, the practice of promoting to consumers near a competitor’s location. The analysis is based on a randomized field experiment in which mobile promotions were sent to customers at three similar shopping areas (competitive, focal, and benchmark locations). The results show that competitive locational targeting can take advantage of heightened demand that a focal retailer would not otherwise capture. Competitive locational targeting produced increasing returns to promotional discount depth, whereas targeting the focal location produced decreasing returns to deep discounts, indicating saturation effects and profit cannibalization. 
These findings are important for marketers, who can use competitive locational targeting to generate incremental sales without cannibalizing profits. While the experiment focuses on the effects of unilateral promotions, it represents the first step in understanding the competitive implications of mobile marketing technologies.", "title": "" }, { "docid": "bdc9bc09af90bd85f64c79cbca766b61", "text": "The inhalation route is frequently used to administer drugs for the management of respiratory diseases such as asthma or chronic obstructive pulmonary disease. Compared with other routes of administration, inhalation offers a number of advantages in the treatment of these diseases. For example, via inhalation, a drug is directly delivered to the target organ, conferring high pulmonary drug concentrations and low systemic drug concentrations. Therefore, drug inhalation is typically associated with high pulmonary efficacy and minimal systemic side effects. The lung, as a target, represents an organ with a complex structure and multiple pulmonary-specific pharmacokinetic processes, including (1) drug particle/droplet deposition; (2) pulmonary drug dissolution; (3) mucociliary and macrophage clearance; (4) absorption to lung tissue; (5) pulmonary tissue retention and tissue metabolism; and (6) absorptive drug clearance to the systemic perfusion. In this review, we describe these pharmacokinetic processes and explain how they may be influenced by drug-, formulation- and device-, and patient-related factors. Furthermore, we highlight the complex interplay between these processes and describe, using the examples of inhaled albuterol, fluticasone propionate, budesonide, and olodaterol, how various sequential or parallel pulmonary processes should be considered in order to comprehend the pulmonary fate of inhaled drugs.", "title": "" }, { "docid": "c2a297417553cb46fd98353d8b8351ac", "text": "Recent advances in methods and techniques enable us to develop an interactive overlay to the global map of science based on aggregated citation relations among the 9,162 journals contained in the Science Citation Index and Social Science Citation Index 2009 combined. The resulting mapping is provided by VOSViewer. We first discuss the pros and cons of the various options: cited versus citing, multidimensional scaling versus spring-embedded algorithms, VOSViewer versus Gephi, and the various clustering algorithms and similarity criteria. Our approach focuses on the positions of journals in the multidimensional space spanned by the aggregated journal-journal citations. A number of choices can be left to the user, but we provide default options reflecting our preferences. Some examples are also provided; for example, the potential of using this technique to assess the interdisciplinarity of organizations and/or document sets.", "title": "" }, { "docid": "0c7221ffca357ba80401551333e1080d", "text": "The effects of temperature and current on the resistance of small geometry silicided contact structures have been characterized and modeled for the first time. Both, temperature and high current induced self heating have been shown to cause contact resistance lowering which can be significant in the performance of advanced ICs. It is demonstrated that contact-resistance sensitivity to temperature and current is controlled by the silicide thickness which influences the interface doping concentration, N. Behavior of W-plug and force-fill (FF) Al plug contacts have been investigated in detail. 
A simple model has been formulated which directly correlates contact resistance to temperature and N. Furthermore, thermal impedance of these contact structures have been extracted and a critical failure temperature demonstrated that can be used to design robust contact structures.", "title": "" }, { "docid": "9fdecc8854f539ddf7061c304616130b", "text": "This paper describes the pricing strategy model deployed at Airbnb, an online marketplace for sharing home and experience. The goal of price optimization is to help hosts who share their homes on Airbnb set the optimal price for their listings. In contrast to conventional pricing problems, where pricing strategies are applied to a large quantity of identical products, there are no \"identical\" products on Airbnb, because each listing on our platform offers unique values and experiences to our guests. The unique nature of Airbnb listings makes it very difficult to estimate an accurate demand curve that's required to apply conventional revenue maximization pricing strategies.\n Our pricing system consists of three components. First, a binary classification model predicts the booking probability of each listing-night. Second, a regression model predicts the optimal price for each listing-night, in which a customized loss function is used to guide the learning. Finally, we apply additional personalization logic on top of the output from the second model to generate the final price suggestions. In this paper, we focus on describing the regression model in the second stage of our pricing system. We also describe a novel set of metrics for offline evaluation. The proposed pricing strategy has been deployed in production to power the Price Tips and Smart Pricing tool on Airbnb. Online A/B testing results demonstrate the effectiveness of the proposed strategy model.", "title": "" }, { "docid": "9cc30ebeb2b51dbf70732a8df7c7fda2", "text": "This paper provides a summary of the 2007 Mars Design Reference Architecture 5.0 (DRA 5.0) [1], which is the latest in a series of NASA Mars reference missions. It provides a vision of one potential approach to human Mars exploration, including how Constellation systems could be used. The strategy and example implementation concepts that are described here should not be viewed as constituting a formal plan for the human exploration of Mars, but rather provide a common framework for future planning of systems concepts, technology development, and operational testing as well as potential Mars robotic missions, research that is conducted on the International Space Station, and future potential lunar exploration missions. This summary of the Mars DRA 5.0 provides an overview of the overall mission approach, surface strategy and exploration goals, as well as the key systems and challenges for the first three concepts for human missions to Mars.1,2", "title": "" } ]
scidocsrr
57450dda0a0a86d5c44eece0724e9293
Real Time Implementation of RTOS based Vehicle Tracking System
[ { "docid": "f5519eff0c13e0ee42245fdf2627b8ae", "text": "An efficient vehicle tracking system is designed and implemented for tracking the movement of any equipped vehicle from any location at any time. The proposed system made good use of a popular technology that combines a Smartphone application with a microcontroller. This will be easy to make and inexpensive compared to others. The designed in-vehicle device works using Global Positioning System (GPS) and Global system for mobile communication / General Packet Radio Service (GSM/GPRS) technology that is one of the most common ways for vehicle tracking. The device is embedded inside a vehicle whose position is to be determined and tracked in real-time. A microcontroller is used to control the GPS and GSM/GPRS modules. The vehicle tracking system uses the GPS module to get geographic coordinates at regular time intervals. The GSM/GPRS module is used to transmit and update the vehicle location to a database. A Smartphone application is also developed for continuously monitoring the vehicle location. The Google Maps API is used to display the vehicle on the map in the Smartphone application. Thus, users will be able to continuously monitor a moving vehicle on demand using the Smartphone application and determine the estimated distance and time for the vehicle to arrive at a given destination. In order to show the feasibility and effectiveness of the system, this paper presents experimental results of the vehicle tracking system and some experiences on practical implementations.", "title": "" } ]
[ { "docid": "11cf39e49d5365f78ff849c8800fe724", "text": "Modern CNN-based object detectors rely on bounding box regression and non-maximum suppression to localize objects. While the probabilities for class labels naturally reflect classification confidence, localization confidence is absent. This makes properly localized bounding boxes degenerate during iterative regression or even suppressed during NMS. In the paper we propose IoU-Net learning to predict the IoU between each detected bounding box and the matched ground-truth. The network acquires this confidence of localization, which improves the NMS procedure by preserving accurately localized bounding boxes. Furthermore, an optimization-based bounding box refinement method is proposed, where the predicted IoU is formulated as the objective. Extensive experiments on the MS-COCO dataset show the effectiveness of IoU-Net, as well as its compatibility with and adaptivity to several state-of-the-art object detectors.", "title": "" }, { "docid": "30997f1a8b350df688a8d85b3f7782a6", "text": "This paper proposes a facial expression recognition (FER) method in videos. The proposed method automatically selects the peak expression face from a video sequence using closeness of the face to the neutral expression. The severely non-frontal faces and poorly aligned faces are discarded in advance to eliminate their negative effects on the peak expression face selection and FER. To reduce the effect of the facial identity in the feature extraction, we compute difference information between the peak expression face and its intra class variation (ICV) face. An ICV face is generated by combining the training faces of an expression class and looks similar to the peak expression face in identity. Because the difference information is defined as the distances of locally pooled texture features between the two faces, the feature extraction is robust to face rotation and mis-alignment. Results show that the proposed method is practical with videos containing spontaneous facial expressions and pose variations. & 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b269bb721ca2a75fd6291295493b7af8", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.", "title": "" }, { "docid": "4408d485de63034cb2225ee7aa9e3afe", "text": "We present the characterization of dry spiked biopotential electrodes and test their suitability to be used in anesthesia monitoring systems based on the measurement of electroencephalographic signals. The spiked electrode consists of an array of microneedles penetrating the outer skin layers. We found a significant dependency of the electrode-skin-electrode impedance (ESEI) on the electrode size (i.e., the number of spikes) and the coating material of the spikes. Electrodes larger than 3/spl times/3 mm/sup 2/ coated with Ag-AgCl have sufficiently low ESEI to be well suited for electroencephalograph (EEG) recordings. The maximum measured ESEI was 4.24 k/spl Omega/ and 87 k/spl Omega/, at 1 kHz and 0.6 Hz, respectively. The minimum ESEI was 0.65 k/spl Omega/ an 16 k/spl Omega/, at the same frequencies. The ESEI of spiked electrodes is stable over an extended period of time. The arithmetic mean of the generated DC offset voltage is 11.8 mV immediately after application on the skin and 9.8 mV after 20-30 min. 
A spectral study of the generated potential difference revealed that the AC part was unstable at frequencies below approximately 0.8 Hz. Thus, the signal does not interfere with a number of clinical applications using real-time EEG. Comparing raw EEG recordings of the spiked electrode with commercial Zipprep electrodes showed that both signals were similar. Due to the mechanical strength of the silicon microneedles and the fact that neither skin preparation nor electrolytic gel is required, use of the spiked electrode is convenient. The spiked electrode is very comfortable for the patient.", "title": "" }, { "docid": "ab11e7eda0563fd482c408aca673f436", "text": "We present Gray S-box for advanced encryption standard. Gray S-box is constructed by adding binary Gray code transformation as a preprocessing step to original AES S-box. Gray S-box corresponds to a polynomial with all 255 non-zero terms in comparison with 9-term polynomial of original AES S-box. This increases the security for S-box against algebraic attacks and interpolation attacks. Besides, as Gray S-box reuses AES S-box as a whole, Gray S-box inherits all advantages and efficiency of any existing optimized implementation of AES S-box. Gray S-box also achieves important cryptographic properties of AES S-box, including strict avalanche criterion, nonlinearity, and differential uniformity.", "title": "" }, { "docid": "19ee248acdd9282c5b5b45fd51f44463", "text": "The primal role that the amyloid-β (Aβ) peptide has in the development of Alzheimer's disease is now almost universally accepted. It is also well recognized that Aβ exists in multiple assembly states, which have different physiological or pathophysiological effects. Although the classical view is that Aβ is deposited extracellularly, emerging evidence from transgenic mice and human patients indicates that this peptide can also accumulate intraneuronally, which may contribute to disease progression.", "title": "" }, { "docid": "407aac76e58ede9ff7a7803a6aade872", "text": "BACKGROUND\nPlantar heel pain is a commonly occurring foot complaint. Stretching is frequently utilised as a treatment, yet a systematic review focusing only on its effectiveness has not been published. This review aimed to assess the effectiveness of stretching on pain and function in people with plantar heel pain.\n\n\nMETHODS\nMedline, EMBASE, CINAHL, AMED, and The Cochrane Library were searched from inception to July 2010. Studies fulfilling the inclusion criteria were independently assessed, and their quality evaluated using the modified PEDro scale.\n\n\nRESULTS\nSix studies including 365 symptomatic participants were included. Two compared stretching with a control, one study compared stretching to an alternative intervention, one study compared stretching to both alternative and control interventions, and two compared different stretching techniques and durations. Quality rating on the modified Pedro scale varied from two to eight out of a maximum of ten points. The methodologies and interventions varied significantly between studies, making meta-analysis inappropriate. Most participants improved over the course of the studies, but when stretching was compared to alternative or control interventions, the changes only reached statistical significance in one study that used a combination of calf muscle stretches and plantar fascia stretches in their stretching programme. 
Another study comparing different stretching techniques, showed a statistically significant reduction in some aspects of pain in favour of plantar fascia stretching over calf stretches in the short term.\n\n\nCONCLUSIONS\nThere were too few studies to assess whether stretching is effective compared to control or other interventions, for either pain or function. However, there is some evidence that plantar fascia stretching may be more effective than Achilles tendon stretching alone in the short-term. Appropriately powered randomised controlled trials, utilizing validated outcome measures, blinded assessors and long-term follow up are needed to assess the efficacy of stretching.", "title": "" }, { "docid": "76d59eaa0e2862438492b55f893ceea3", "text": "The need to increase security in open or public spaces has in turn given rise to the requirement to monitor these spaces and analyse those images on‐site and on‐time. At this point, the use of smart cameras ‐ of which the popularity has been increasing ‐ is one step ahead. With sensors and Digital Signal Processors (DSPs), smart cameras generate ad hoc results by analysing the numeric images transmitted from the sensor by means of a variety of image‐processing algorithms. Since the images are not transmitted to a distance processing unit but rather are processed inside the camera, it does not necessitate high‐ bandwidth networks or high processor powered systems; it can instantaneously decide on the required access. Nonetheless, on account of restricted memory, processing power and overall power, image processing algorithms need to be developed and optimized for embedded processors. Among these algorithms, one of the most important is for face detection and recognition. A number of face detection and recognition methods have been proposed recently and many of these methods have been tested on general‐purpose processors. In smart cameras ‐ which are real‐life applications of such methods ‐ the widest use is on DSPs. In the present study, the Viola‐Jones face detection method ‐ which was reported to run faster on PCs ‐ was optimized for DSPs; the face recognition method was combined with the developed sub‐region and mask‐based DCT (Discrete Cosine Transform). As the employed DSP is a fixed‐point processor, the processes were performed with integers insofar as it was possible. To enable face recognition, the image was divided into sub‐ regions and from each sub‐region the robust coefficients against disruptive elements ‐ like face expression, illumination, etc. ‐ were selected as the features. The discrimination of the selected features was enhanced via LDA (Linear Discriminant Analysis) and then employed for recognition. Thanks to its operational convenience, codes that were optimized for a DSP received a functional test after the computer simulation. In these functional tests, the face recognition system attained a 97.4% success rate on the most popular face database: the FRGC.", "title": "" }, { "docid": "679d1c25d707c099de2f8ed5f0a09612", "text": "BACKGROUND\nAlterations in glenohumeral range of motion, including increased posterior shoulder tightness and glenohumeral internal rotation deficit that exceeds the accompanying external rotation gain, are suggested contributors to throwing-related shoulder injuries such as pathologic internal impingement. 
Yet these contributors have not been identified in throwers with internal impingement.\n\n\nHYPOTHESIS\nThrowers with pathologic internal impingement will exhibit significantly increased posterior shoulder tightness and glenohumeral internal rotation deficit without significantly increased external rotation gain.\n\n\nSTUDY DESIGN\nCase control study; Level of evidence, 3.\n\n\nMETHODS\nEleven throwing athletes with pathologic internal impingement diagnosed using both clinical examination and a magnetic resonance arthrogram were demographically matched with 11 control throwers who had no history of upper extremity injury. Passive glenohumeral internal and external rotation were measured bilaterally with standard goniometry at 90 degrees of humeral abduction and elbow flexion. Bilateral differences in glenohumeral range of motion were used to calculate glenohumeral internal rotation deficit and external rotation gain. Posterior shoulder tightness was quantified as the bilateral difference in passive shoulder horizontal adduction with the scapula retracted and the shoulder at 90 degrees of elevation. Comparisons were made between groups with dependent t tests (P < .05).\n\n\nRESULTS\nThe throwing athletes with internal impingement demonstrated significantly greater glenohumeral internal rotation deficit (P = .03) and posterior shoulder tightness (P = .03) compared with the control subjects. No significant differences were observed in external rotation gain between groups (P = .16).\n\n\nCLINICAL RELEVANCE\nThese findings could indicate that a tightening of the posterior elements of the shoulder (capsule, rotator cuff) may contribute to impingement. The results suggest that management should include stretching to restore flexibility to the posterior shoulder.", "title": "" }, { "docid": "e35994d3f2cb82666115a001dbd002d0", "text": "Filtering relevant documents with respect to entities is an essential task in the context of knowledge base construction and maintenance. It entails processing a time-ordered stream of documents that might be relevant to an entity in order to select only those that contain vital information. State-of-the-art approaches to document filtering for popular entities are entity-dependent: they rely on and are also trained on the specifics of differentiating features for each specific entity. Moreover, these approaches tend to use so-called extrinsic information such as Wikipedia page views and related entities which is typically only available only for popular head entities. Entity-dependent approaches based on such signals are therefore ill-suited as filtering methods for long-tail entities. In this paper we propose a document filtering method for long-tail entities that is entity-independent and thus also generalizes to unseen or rarely seen entities. It is based on intrinsic features, i.e., features that are derived from the documents in which the entities are mentioned. We propose a set of features that capture informativeness, entity-saliency, and timeliness. In particular, we introduce features based on entity aspect similarities, relation patterns, and temporal expressions and combine these with standard features for document filtering. Experiments following the TREC KBA 2014 setup on a publicly available dataset show that our model is able to improve the filtering performance for long-tail entities over several baselines. 
Results of applying the model to unseen entities are promising, indicating that the model is able to learn the general characteristics of a vital document. The overall performance across all entities---i.e., not just long-tail entities---improves upon the state-of-the-art without depending on any entity-specific training data.", "title": "" }, { "docid": "db2553268fc3ccaddc3ec7077514655c", "text": "Aspect extraction is a task to abstract the common properties of objects from corpora discussing them, such as reviews of products. Recent work on aspect extraction is leveraging the hierarchical relationship between products and their categories. However, such effort focuses on the aspects of child categories but ignores those from parent categories. Hence, we propose an LDA-based generative topic model inducing the two-layer categorical information (CAT-LDA), to balance the aspects of both a parent category and its child categories. Our hypothesis is that child categories inherit aspects from parent categories, controlled by the hierarchy between them. Experimental results on 5 categories of Amazon.com products show that both common aspects of parent category and the individual aspects of subcategories can be extracted to align well with the common sense. We further evaluate the manually extracted aspects of 16 products, resulting in an average hit rate of 79.10%.", "title": "" }, { "docid": "7ed58e8ec5858bdcb5440123aea57bb1", "text": "The demand for cloud computing is increasing because of the popularity of digital devices and the wide use of the Internet. Among cloud computing services, most consumers use cloud storage services that provide mass storage. This is because these services give them various additional functions as well as storage. It is easy to access cloud storage services using smartphones. With increasing utilization, it is possible for malicious users to abuse cloud storage services. Therefore, a study on digital forensic investigation of cloud storage services is necessary. This paper proposes new procedure for investigating and analyzing the artifacts of all accessible devices, such as Windows, Mac, iPhone, and Android smartphone.", "title": "" }, { "docid": "99b2cf752848a5b787b378719dc934f1", "text": "This paper describes an Explainable Artificial Intelligence (XAI) tool that allows entities to answer questions about their activities within a tactical simulation. We show how XAI can be used to provide more meaningful after-action reviews and discuss ongoing work to integrate an intelligent tutor into the XAI framework.", "title": "" }, { "docid": "3157970218dc3761576345c0e01e3121", "text": "This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select and execute among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. 
Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu", "title": "" }, { "docid": "462256d2d428f8c77269e4593518d675", "text": "This paper is devoted to the modeling of real textured images by functional minimization and partial differential equations. Following the ideas of Yves Meyer in a total variation minimization framework of L. Rudin, S. Osher, and E. Fatemi, we decompose a given (possibly textured) image f into a sum of two functions u+v, where u ∈ BV is a function of bounded variation (a cartoon or sketchy approximation of f), while v is a function representing the texture or noise. To model v we use the space of oscillating functions introduced by Yves Meyer, which is in some sense the dual of the BV space. The new algorithm is very simple, making use of differential equations and is easily solved in practice. Finally, we implement the method by finite differences, and we present various numerical results on real textured images, showing the obtained decomposition u+v, but we also show how the method can be used for texture discrimination and texture segmentation.", "title": "" }, { "docid": "471579f955f8b68a357c8780a7775cc9", "text": "In addition to practitioners who care for male patients, with the increased use of high-resolution anoscopy, practitioners who care for women are seeing more men in their practices as well. Some diseases affecting the penis can impact on their sexual partners. Many of the lesions and neoplasms of the penis occur on the vulva as well. In addition, there are common and rare lesions unique to the penis. A review of the scope of penile lesions and neoplasms that may present in a primary care setting is presented to assist in developing a differential diagnosis if such a patient is encountered, as well as for practitioners who care for their sexual partners. A familiarity will assist with recognition, as well as when consultation is needed.", "title": "" }, { "docid": "ddb804eec29ebb8d7f0c80223184305a", "text": "Near Field Communication (NFC) enables physically proximate devices to communicate over very short ranges in a peer-to-peer manner without incurring complex network configuration overheads. However, adoption of NFC-enabled applications has been stymied by the low levels of penetration of NFC hardware. In this paper, we address the challenge of enabling NFC-like capability on the existing base of mobile phones. To this end, we develop Dhwani, a novel, acoustics-based NFC system that uses the microphone and speakers on mobile phones, thus eliminating the need for any specialized NFC hardware. A key feature of Dhwani is the JamSecure technique, which uses self-jamming coupled with self-interference cancellation at the receiver, to provide an information-theoretically secure communication channel between the devices. Our current implementation of Dhwani achieves data rates of up to 2.4 Kbps, which is sufficient for most existing NFC applications.", "title": "" }, { "docid": "00b73790bb0bb2b828e1d443d3e13cf4", "text": "Grippers and robotic hands are an important field in robotics. 
Recently, the combination of grasping devices and haptic feedback has been a promising avenue for many applications such as laparoscopic surgery and spatial telemanipulation. This paper presents the work behind a new self-adaptive, a.k.a. underactuated, gripper with a proprioceptive haptic feedback in which the apparent stiffness of the gripper as seen by its actuator is used to estimate contact location. This system combines many technologies and concepts in an integrated mechatronic tool. Among them, underactuated grasping, haptic feedback, compliant joints and a differential seesaw mechanism are used. Following a theoretical modeling of the gripper based on the virtual work principle, the authors present numerical data used to validate this model. Then, a presentation of the practical prototype is given, discussing the sensors, controllers, and mechanical architecture. Finally, the control law and the experimental validation of the haptic feedback are presented.", "title": "" }, { "docid": "c44f971f063f8594985a98beb897464a", "text": "In recent years, multi-agent epistemic planning has received attention from both dynamic logic and planning communities. Existing implementations of multi-agent epistemic planning are based on compilation into classical planning and suffer from various limitations, such as generating only linear plans, restriction to public actions, and incapability to handle disjunctive beliefs. In this paper, we propose a general representation language for multi-agent epistemic planning where the initial KB and the goal, the preconditions and effects of actions can be arbitrary multi-agent epistemic formulas, and the solution is an action tree branching on sensing results. To support efficient reasoning in the multi-agent KD45 logic, we make use of a normal form called alternating cover disjunctive formulas (ACDFs). We propose basic revision and update algorithms for ACDFs. We also handle static propositional common knowledge, which we call constraints. Based on our reasoning, revision and update algorithms, adapting the PrAO algorithm for contingent planning from the literature, we implemented a multi-agent epistemic planner called MEPK. Our experimental results show the viability of our approach.", "title": "" }, { "docid": "ddae1c6469769c2c7e683bfbc223ad1a", "text": "Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.", "title": "" } ]
scidocsrr
bde0012476a149e2e5bdeb0f7e2f64f6
Improvement of Torque Capability of Permanent-Magnet Motor by Using Hybrid Rotor Configuration
[ { "docid": "2ec37b57a75c70e9edeb9603b0dac5e0", "text": "In this paper, different analysis and design techniques are used to analyze the drive motor in the 2004 Prius hybrid vehicle and to examine alternative spoke-type magnet rotor (buried magnets with magnetization which is orthogonal to the radial direction) and induction motor arrangements. These machines are characterized by high transient torque requirement, compactness, and forced cooling. While rare-earth magnet machines are commonly used in these applications, there is an increasing interest in motors without magnets, hence the investigation of an induction motor. This paper illustrates that the machines operate under highly saturated conditions at high torque and that care should be taken when selecting the correct analysis technique. This is illustrated by divergent results when using I-Psi loops and dq techniques to calculate the torque.", "title": "" }, { "docid": "3a8de50db1b3353797345bb35942df2b", "text": "This paper proposes a method of reducing total torque ripple by magnets shifting. The key of this method is to consider the reluctance torque ripple, because the reluctance torque ripple is the main source of torque ripple in inset permanent magnet synchronous motor. This method is realized by appropriately choosing Repeating Unit, which indicates a group of poles producing torques with consistency in waveforms and phases. Meanwhile, the uniform analytical expressions of torques with magnets shifting are established, including cogging torque, reluctance torque, and total torque. Moreover, the total torque ripple can be reduced greatly, and the average total torque loss is acceptable when one pole-pair is chosen as one Repeating Unit and the shifting angles of Repeating Unit are 3.75 and 1.875 mechanical degrees. These theoretical analyses are verified by the finite-element method.", "title": "" } ]
[ { "docid": "fe9ed5460f3973636a41878ff5d06524", "text": "Recent results suggest that state-of-the-art saliency models perform far from optimal in predicting fixations. This lack in performance has been attributed to an inability to model the influence of high-level image features such as objects. Recent seminal advances in applying deep neural networks to tasks like object recognition suggests that they are able to capture this kind of structure. However, the enormous amount of training data necessary to train these networks makes them difficult to apply directly to saliency prediction. We present a novel way of reusing existing neural networks that have been pretrained on the task of object recognition in models of fixation prediction. Using the well-known network of Krizhevsky et al., 2012, we come up with a new saliency model that significantly outperforms all state-of-the-art models on the MIT Saliency Benchmark. We show that the structure of this network allows new insights in the psychophysics of fixation selection and potentially their neural implementation. To train our network, we build on recent work on the modeling of saliency as point processes. By understanding how humans choose eye fixations, we can hope to understand and explain human behaviour in a number of vision-related tasks. For this reason human eye movements have been studied for more than 80 years (e.g., Buswell, 1935). During the last 20 years, many models have been developed trying to explain fixations in terms of so called “saliency maps”. Recently, it has been suggested to model saliency maps probabilistically using point processes (Barthelmé et al., 2013) and to evaluate them using loglikelihood (Kümmerer et al., 2014). This evaluation revealed that state-of-theart models of saliency explain only one third of the explainable information in the spatial fixation structure (Kümmerer et al., 2014). Most of the existing models use low-level cues like edge-detectors and color filters (Itti et al., 1998) or local image statistics (Zhang et al., 2008; Bruce and 1 ar X iv :1 41 1. 10 45 v1 [ cs .C V ] 4 N ov 2 01 4", "title": "" }, { "docid": "0c1381eb866a42da820a2b18442938e7", "text": "We present a new method that learns to segment and cluster images without labels of any kind. A simple loss based on information theory is used to extract meaningful representations directly from raw images. This is achieved by maximising mutual information of images known to be related by spatial proximity or randomized transformations, which distills their shared abstract content. Unlike much of the work in unsupervised deep learning, our learned function outputs segmentation heatmaps and discrete classifications labels directly, rather than embeddings that need further processing to be usable. The loss can be formulated as a convolution, making it the first end-to-end unsupervised learning method that learns densely and efficiently (i.e. without sampling) for semantic segmentation. Implemented using realistic settings on generic deep neural network architectures, our method attains superior performance on COCO-Stuff and ISPRS-Potsdam for segmentation and STL for clustering, beating state-of-the-art baselines.", "title": "" }, { "docid": "78ae476295aa266a170a981a34767bdd", "text": "Darwin did not focus on deception. Only a few sentences in his book mentioned the issue. 
One of them raised the very interesting question of whether it is difficult to voluntarily inhibit the emotional expressions that are most difficult to voluntarily fabricate. Another suggestion was that it would be possible to unmask a fabricated expression by the absence of the difficult-to-voluntarily-generate facial actions. Still another was that during emotion body movements could be more easily suppressed than facial expression. Research relevant to each of Darwin's suggestions is reviewed, as is other research on deception that Darwin did not foresee.", "title": "" }, { "docid": "a6e09b646c68dec48b003060f402d427", "text": "This research explores the relationship between permeability and crack width in cracked, steel fiber-reinforced concrete. In addition, it inspects the influence of steel fiber reinforcement on concrete permeability. The feedback-controlled splitting tension (also known as the Brazilian test) is used to induce cracks of up to 500 μm (0.02 in.) in concrete specimens without reinforcement, and with steel fiber reinforcement volumes of both 0.5 and 1%. The cracks relax after induced cracking. The steel fibers decrease the permeability of specimens with relaxed cracks larger than 100 μm. DOI: 10.1061/(ASCE)0899-1561(2002)14:4(355) CE Database keywords: Permeability; Cracking; Fiber reinforced materials; Concrete.", "title": "" }, { "docid": "34f784b003f6c9083a5e2c10046d545f", "text": "Granulomatous pigmented purpuric dermatosis (GPPD) is a rare histologic variant of pigmented purpuric dermatosis (PPD). It includes classic histology changes of PPD with superimposed granulomas. This variant is thought to be associated with hyperlipidemia and is found predominantly in individuals in the Far East; however, a review of the literature that included 26 documented cases of GPPD revealed these associations might be becoming less clear. We report a case of GPPD in an elderly white man who had an eruption involving the majority of the lower legs.", "title": "" }, { "docid": "1fcbc7d6c408d00d3bd1e225e28a32cc", "text": "Active learning aims to train an accurate prediction model with minimum cost by labeling most informative instances. In this paper, we survey existing works on active learning from an instance-selection perspective and classify them into two categories with a progressive relationship: (1) active learning merely based on uncertainty of independent and identically distributed (IID) instances, and (2) active learning by further taking into account instance correlations. Using the above categorization, we summarize major approaches in the field, along with their technical strengths/weaknesses, followed by a simple runtime performance comparison, and discussion about emerging active learning applications and instance-selection challenges therein. This survey intends to provide a high-level summarization for active learning and motivates interested readers to consider instance-selection approaches for designing effective active learning solutions.", "title": "" }, { "docid": "e4f648d12495a2d7615fe13c84f35bbe", "text": "We propose a simple modification to existing neural machine translation (NMT) models that enables using a single universal model to translate between multiple languages while allowing for language specific parameterization, and that can also be used for domain adaptation. 
Our approach requires no changes to the model architecture of a standard NMT system, but instead introduces a new component, the contextual parameter generator (CPG), that generates the parameters of the system (e.g., weights in a neural network). This parameter generator accepts source and target language embeddings as input, and generates the parameters for the encoder and the decoder, respectively. The rest of the model remains unchanged and is shared across all languages. We show how this simple modification enables the system to use monolingual data for training and also perform zero-shot translation. We further show it is able to surpass state-of-the-art performance for both the IWSLT-15 and IWSLT-17 datasets and that the learned language embeddings are able to uncover interesting relationships between languages.", "title": "" }, { "docid": "f9fdb3d0719ebbc8d84702c3558e9eac", "text": "Since falls are a major public health problem among older people, the number of systems aimed at detecting them has increased dramatically over recent years. This work presents an extensive literature review of fall detection systems, including comparisons among various kinds of studies. It aims to serve as a reference for both clinicians and biomedical engineers planning or conducting field investigations. Challenges, issues and trends in fall detection have been identified after the reviewing work. The number of studies using context-aware techniques is still increasing but there is a new trend towards the integration of fall detection into smartphones as well as the use of machine learning methods in the detection algorithm. We have also identified challenges regarding performance under real-life conditions, usability, and user acceptance as well as issues related to power consumption, real-time operations, sensing limitations, privacy and record of real-life falls.", "title": "" }, { "docid": "0c01132904f2c580884af1391069addd", "text": "BACKGROUND\nThe inclusion of qualitative studies in systematic reviews poses methodological challenges. This paper presents worked examples of two methods of data synthesis (textual narrative and thematic), used in relation to one review, with the aim of enabling researchers to consider the strength of different approaches.\n\n\nMETHODS\nA systematic review of lay perspectives of infant size and growth was conducted, locating 19 studies (including both qualitative and quantitative). The data extracted from these were synthesised using both a textual narrative and a thematic synthesis.\n\n\nRESULTS\nThe processes of both methods are presented, showing a stepwise progression to the final synthesis. Both methods led us to similar conclusions about lay views toward infant size and growth. Differences between methods lie in the way they dealt with study quality and heterogeneity.\n\n\nCONCLUSION\nOn the basis of the work reported here, we consider textual narrative and thematic synthesis have strengths and weaknesses in relation to different research questions. Thematic synthesis holds most potential for hypothesis generation, but may obscure heterogeneity and quality appraisal. Textual narrative synthesis is better able to describe the scope of existing research and account for the strength of evidence, but is less good at identifying commonality.", "title": "" }, { "docid": "95e89119b672d76a9cd5ada7b2ae7362", "text": "The aim of this project is to optimize an Arithmetic Logical Unit with BIST capability. 
The Arithmetic Logical Unit is used in many processing and computing devices; due to the rapid development of technology, faster arithmetic and logical units that consume less power and area are required. Due to the increasing integration complexities of ICs, the optimized Arithmetic Logical Unit implementation may sometimes malfunction, so a testing capability must be provided, and this is accomplished by Built-In-Self-Test (BIST). This project has been done with the help of the Verilog Hardware Description Language, simulated with the Xilinx 10.1 software, and synthesized with a Cadence tool. After synthesis, area and power are reduced by 31% and 42%, respectively. Keywords: Optimized ALU, Ripple Carry Adder, Vedic Multiplier, Built-In-Self-Test.", "title": "" }, { "docid": "be0ba5b90102aab7cbee08a29333be93", "text": "Test-driven development (TDD) has been proposed as a solution to improve testing in industry and in academia. The purpose of this poster is to outline the challenges of teaching a novel Test-First approach in a Level 8 course on Software Testing. Traditionally, introductory programming and software testing courses teach a test-last approach. After the introduction of the Extreme Programming version of AGILE, industry and academia have slowly shifted their focus to the Test-First approach. This poster paper is a pedagogical insight into this shift from the test-last to the test-first approach known as Test Driven Development (TDD).", "title": "" }, { "docid": "5cf396e42e8708d768235f95bc8f227f", "text": "This thesis examines how artificial neural networks can benefit a large vocabulary, speaker independent, continuous speech recognition system. Currently, most speech recognition systems are based on hidden Markov models (HMMs), a statistical framework that supports both acoustic and temporal modeling. Despite their state-of-the-art performance, HMMs make a number of suboptimal modeling assumptions that limit their potential effectiveness. Neural networks avoid many of these assumptions, while they can also learn complex functions, generalize effectively, tolerate noise, and support parallelism. While neural networks can readily be applied to acoustic modeling, it is not yet clear how they can be used for temporal modeling. Therefore, we explore a class of systems called NN-HMM hybrids, in which neural networks perform acoustic modeling, and HMMs perform temporal modeling. We argue that a NN-HMM hybrid has several theoretical advantages over a pure HMM system, including better acoustic modeling accuracy, better context sensitivity, more natural discrimination, and a more economical use of parameters. These advantages are confirmed experimentally by a NN-HMM hybrid that we developed, based on context-independent phoneme models, that achieved 90.5% word accuracy on the Resource Management database, in contrast to only 86.0% accuracy achieved by a pure HMM under similar conditions. In the course of developing this system, we explored two different ways to use neural networks for acoustic modeling: prediction and classification. We found that predictive networks yield poor results because of a lack of discrimination, but classification networks gave excellent results. We verified that, in accordance with theory, the output activations of a classification network form highly accurate estimates of the posterior probabilities P(class|input), and we showed how these can easily be converted to likelihoods P(input|class) for standard HMM recognition algorithms. 
Finally, this thesis reports how we optimized the accuracy of our system with many natural techniques, such as expanding the input window size, normalizing the inputs, increasing the number of hidden units, converting the network’s output activations to log likelihoods, optimizing the learning rate schedule by automatic search, backpropagating error from word level outputs, and using gender dependent networks.", "title": "" }, { "docid": "4e182b30dcbc156e2237e7d1d22d5c93", "text": "A brain-computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI) is presented which allows human subjects to observe and control changes of their own blood oxygen level-dependent (BOLD) response. This BCI performs data preprocessing (including linear trend removal, 3D motion correction) and statistical analysis on-line. Local BOLD signals are continuously fed back to the subject in the magnetic resonance scanner with a delay of less than 2 s from image acquisition. The mean signal of a region of interest is plotted as a time-series superimposed on color-coded stripes which indicate the task, i.e., to increase or decrease the BOLD signal. We exemplify the presented BCI with one volunteer intending to control the signal of the rostral-ventral and dorsal part of the anterior cingulate cortex (ACC). The subject achieved significant changes of local BOLD responses as revealed by region of interest analysis and statistical parametric maps. The percent signal change increased across fMRI-feedback sessions suggesting a learning effect with training. This methodology of fMRI-feedback can assess voluntary control of circumscribed brain areas. As a further extension, behavioral effects of local self-regulation become accessible as a new field of research.", "title": "" }, { "docid": "ca50f634d24d4cd00a079e496d00e4b2", "text": "We designed and implemented a fork-type automatic guided vehicle (AGV) with a laser guidance system. Most previous AGVs have used two types of guidance systems: magnet-gyro and wire guidance. However, these guidance systems have high costs, are difficult to maintain with changes in the operating environment, and can drive only a pre-determined path with installed sensors. A laser guidance system was developed for addressing these issues, but limitations including slow response time and low accuracy remain. We present a laser guidance system and control system for AGVs with laser navigation. For analyzing the performance of the proposed system, we designed and built a fork-type AGV, and performed repetitions of our experiments under the same working conditions. The results show an average positioning error of 51.76 mm between the simulated driving path and the driving path of the actual fork-type AGV. Consequently, we verified that the proposed method is effective and suitable for use in actual AGVs.", "title": "" }, { "docid": "9ce14872fe5556573b9e17c9ec141e6c", "text": "This paper presents an integrated design method for pedestrian avoidance by considering the interaction between trajectory planning and trajectory tracking. This method aims to reduce the need for control calibration by properly considering plant uncertainties and tire force limits at the design stage. Two phases of pedestrian avoidance—trajectory planning and trajectory tracking—are designed in an integrated manner. 
The available tire force is distributed to the feedforward part, which is used to generate the nominal trajectory in trajectory planning phase, and to the feedback part, which is used for trajectory tracking. The trajectory planning problem is solved not by searching through a continuous spectrum of steering/braking actions, but by examining a limited set of “motion primitives,” or motion templates that can be adopted in sequence to avoid the pedestrian. An emergency rapid random tree (RRT) methodology is proposed to quickly identify a feasible solution. Subsequently, in order to guarantee accuracy and provide safety margin in trajectory tracking with presence of model uncertainties and exogenous disturbance, a simplified LQR-based funnel algorithm is proposed. Simulation results provide insight into how pedestrian collisions can be avoided under given initial vehicle and pedestrian states.", "title": "" }, { "docid": "31a1a5ce4c9a8bc09cbecb396164ceb4", "text": "In trying out this hypothesis we shall understand by attitude the positive or negative relationship of a person p to another person o or to an impersonal entity x which may be a situation, an event, an idea, or a thing, etc. Examples are: to like, to love, to esteem, to value, and their opposites. A positive relation of this kind will be written L, a negative one ~L. Thus, pLo means p likes, loves, or values o, or, expressed differently, o is positive for p.", "title": "" }, { "docid": "c6ad70b8b213239b0dd424854af194e2", "text": "The neural mechanisms underlying the processing of conventional and novel conceptual metaphorical sentences were examined with event-related potentials (ERPs). Conventional metaphors were created based on the Contemporary Theory of Metaphor and were operationally defined as familiar and readily interpretable. Novel metaphors were unfamiliar and harder to interpret. Using a sensicality judgment task, we compared ERPs elicited by the same target word when it was used to end anomalous, novel metaphorical, conventional metaphorical and literal sentences. Amplitudes of the N400 ERP component (320-440 ms) were more negative for anomalous sentences, novel metaphors, and conventional metaphors compared with literal sentences. Within a later window (440-560 ms), ERPs associated with conventional metaphors converged to the same level as literal sentences while the novel metaphors stayed anomalous throughout. The reported results were compatible with models assuming an initial stage for metaphor mappings from one concept to another and that these mappings are cognitively taxing.", "title": "" }, { "docid": "6aee20acd54b5d6f2399106075c9fee1", "text": "BACKGROUND\nThe aim of this study was to compare the effectiveness of the ampicillin plus ceftriaxone (AC) and ampicillin plus gentamicin (AG) combinations for treating Enterococcus faecalis infective endocarditis (EFIE).\n\n\nMETHODS\nAn observational, nonrandomized, comparative multicenter cohort study was conducted at 17 Spanish and 1 Italian hospitals. Consecutive adult patients diagnosed of EFIE were included. 
Outcome measurements were death during treatment and at 3 months of follow-up, adverse events requiring treatment withdrawal, treatment failure requiring a change of antimicrobials, and relapse.\n\n\nRESULTS\nA larger percentage of AC-treated patients (n = 159) had previous chronic renal failure than AG-treated patients (n = 87) (33% vs 16%, P = .004), and AC patients had a higher incidence of cancer (18% vs 7%, P = .015), transplantation (6% vs 0%, P = .040), and healthcare-acquired infection (59% vs 40%, P = .006). Between AC and AG-treated EFIE patients, there were no differences in mortality while on antimicrobial treatment (22% vs 21%, P = .81) or at 3-month follow-up (8% vs 7%, P = .72), in treatment failure requiring a change in antimicrobials (1% vs 2%, P = .54), or in relapses (3% vs 4%, P = .67). However, interruption of antibiotic treatment due to adverse events was much more frequent in AG-treated patients than in those receiving AC (25% vs 1%, P < .001), mainly due to new renal failure (≥25% increase in baseline creatinine concentration; 23% vs 0%, P < .001).\n\n\nCONCLUSIONS\nAC appears as effective as AG for treating EFIE patients and can be used with virtually no risk of renal failure and regardless of the high-level aminoglycoside resistance status of E. faecalis.", "title": "" }, { "docid": "6ce2529ff446db2d647337f30773cdc9", "text": "The physical demands in soccer have been studied intensively, and the aim of the present review is to provide an overview of metabolic changes during a game and their relation to the development of fatigue. Heart-rate and body-temperature measurements suggest that for elite soccer players the average oxygen uptake during a match is around 70% of maximum oxygen uptake (VO2max). A top-class player has 150 to 250 brief intense actions during a game, indicating that the rates of creatine-phosphate (CP) utilization and glycolysis are frequently high during a game, which is supported by findings of reduced muscle CP levels and severalfold increases in blood and muscle lactate concentrations. Likewise, muscle pH is lowered and muscle inosine monophosphate (IMP) elevated during a soccer game. Fatigue appears to occur temporarily during a game, but it is not likely to be caused by elevated muscle lactate, lowered muscle pH, or change in muscle-energy status. It is unclear what causes the transient reduced ability of players to perform maximally. Muscle glycogen is reduced by 40% to 90% during a game and is probably the most important substrate for energy production, and fatigue toward the end of a game might be related to depletion of glycogen in some muscle fibers. Blood glucose and catecholamines are elevated and insulin lowered during a game. The blood free-fatty-acid levels increase progressively during a game, probably reflecting an increasing fat oxidation compensating for the lowering of muscle glycogen. Thus, elite soccer players have high aerobic requirements throughout a game and extensive anaerobic demands during periods of a match leading to major metabolic changes, which might contribute to the observed development of fatigue during and toward the end of a game.", "title": "" }, { "docid": "22eac984935d6040db2ab96eeb5d2bc9", "text": "Under frequency load shedding (UFLS) and under voltage load shedding (UVLS) are attracting more attention, as large disturbances occur more frequently than in the past. 
Usually, these two schemes work independently from each other, and are not designed in an integrated way to exploit their combined effect on load shedding. Besides, reactive power is seldom considered in the load shedding process. To fill this gap, we propose in this paper a new centralized, adaptive load shedding algorithm, which uses both voltage and frequency information provided by phasor measurement units (PMUs). The main contribution of the new method is the consideration of reactive power together with active power in the load shedding strategy. Therefore, this method addresses the combined voltage and frequency stability issues better than the independent approaches. The new method is tested on the IEEE 39-Bus system, in order to compare it with other methods. Simulation results show that, after large disturbance, this method can bring the system back to a new stable steady state that is better from the point of view of frequency and voltage stability, and loadability.", "title": "" } ]
scidocsrr
597a522cf3f7df07ec98e60559dc94f2
Capsule Networks for Low Resource Spoken Language Understanding
[ { "docid": "63914ebf92c3c4d84df96f9b965bea5b", "text": "In this paper we study different types of Recurrent Neural Networks (RNN) for sequence labeling tasks. We propose two new variants of RNNs integrating improvements for sequence labeling, and we compare them to the more traditional Elman and Jordan RNNs. We compare all models, either traditional or new, on four distinct tasks of sequence labeling: two on Spoken Language Understanding (ATIS and MEDIA); and two of POS tagging for the French Treebank (FTB) and the Penn Treebank (PTB) corpora. The results show that our new variants of RNNs are always more effective than the others.", "title": "" } ]
[ { "docid": "5a27ac14c13ef7c7cf9d6fd1b535d03e", "text": "Great database systems performance relies heavily on index tuning, i.e., creating and utilizing the best indices depending on the workload. However, the complexity of the index tuning process has dramatically increased in recent years due to ad-hoc workloads and shortage of time and system resources to invest in tuning.\n This paper introduces holistic indexing, a new approach to automated index tuning in dynamic environments. Holistic indexing requires zero set-up and tuning effort, relying on adaptive index creation as a side-effect of query processing. Indices are created incrementally and partially;they are continuously refined as we process more and more queries. Holistic indexing takes the state-of-the-art adaptive indexing ideas a big step further by introducing the notion of a system which never stops refining the index space, taking educated decisions about which index we should incrementally refine next based on continuous knowledge acquisition about the running workload and resource utilization. When the system detects idle CPU cycles, it utilizes those extra cycles by refining the adaptive indices which are most likely to bring a benefit for future queries. Such idle CPU cycles occur when the system cannot exploit all available cores up to 100%, i.e., either because the workload is not enough to saturate the CPUs or because the current tasks performed for query processing are not easy to parallelize to the point where all available CPU power is exploited.\n In this paper, we present the design of holistic indexing for column-oriented database architectures and we discuss a detailed analysis against parallel versions of state-of-the-art indexing and adaptive indexing approaches. Holistic indexing is implemented in an open-source column-store DBMS. Our detailed experiments on both synthetic and standard benchmarks (TPC-H) and workloads (SkyServer) demonstrate that holistic indexing brings significant performance gains by being able to continuously refine the physical design in parallel to query processing, exploiting any idle CPU resources.", "title": "" }, { "docid": "df17bdf6da805379342e988312445824", "text": "The importance of bringing causality into play when designing feature selection methods is more and more acknowledged in the machine learning community. This paper proposes a filter approach based on information theory which aims to prioritise direct causal relationships in feature selection problems where the ratio between the number of features and the number of samples is high. This approach is based on the notion of interaction which is shown to be informative about the relevance of an input subset as well as its causal relationship with the target. The resulting filter, called mIMR (min-Interaction Max-Relevance), is compared with state-of-the-art approaches. Classification results on 25 real microarray datasets show that the incorporation of causal aspects in the feature assessment is beneficial both for the resulting accuracy and stability. A toy example of causal discovery shows the effectiveness of the filter for identifying direct causal relationships.", "title": "" }, { "docid": "43b2912b6ad9824e3263ff9951daf0c2", "text": "Monolingual alignment models have been shown to boost the performance of question answering systems by ”bridging the lexical chasm” between questions and answers. 
The main limitation of these approaches is that they require semistructured training data in the form of question-answer pairs, which is difficult to obtain in specialized domains or lowresource languages. We propose two inexpensive methods for training alignment models solely using free text, by generating artificial question-answer pairs from discourse structures. Our approach is driven by two representations of discourse: a shallow sequential representation, and a deep one based on Rhetorical Structure Theory. We evaluate the proposed model on two corpora from different genres and domains: one from Yahoo! Answers and one from the biology domain, and two types of non-factoid questions: manner and reason. We show that these alignment models trained directly from discourse structures imposed on free text improve performance considerably over an information retrieval baseline and a neural network language model trained on the same data.", "title": "" }, { "docid": "49fed572de904ac3bb9aab9cdc874cc6", "text": "Factorized Hidden Layer (FHL) adaptation has been proposed for speaker adaptation of deep neural network (DNN) based acoustic models. In FHL adaptation, a speaker-dependent (SD) transformation matrix and an SD bias are included in addition to the standard affine transformation. The SD transformation is a linear combination of rank-1 matrices whereas the SD bias is a linear combination of vectors. Recently, the Long ShortTerm Memory (LSTM) Recurrent Neural Networks (RNNs) have shown to outperform DNN acoustic models in many Automatic Speech Recognition (ASR) tasks. In this work, we investigate the effectiveness of SD transformations for LSTM-RNN acoustic models. Experimental results show that when combined with scaling of LSTM cell states’ outputs, SD transformations achieve 2.3% and 2.1% absolute improvements over the baseline LSTM systems for the AMI IHM and AMI SDM tasks respectively.", "title": "" }, { "docid": "80ddc34ac75a9d2f6b6fd59446a62243", "text": "Yuze Niu 1,2, Yacong Zhang 1,2,*, Zhuo Zhang 1,2, Miaomiao Fan 1, Wengao Lu 1,2 and Zhongjian Chen 1,2 1 Key Laboratory of Microelectronic Devices and Circuits, Department of Microelectronics, Peking University, Beijing 100871, China; yzniu@pku.edu.cn (Y.N.); zhangzhuo1658@163.com (Z.Z.); mayunfmm@163.com (M.F.); wglu@pku.edu.cn (W.L.); chenzj@pku.edu.cn (Z.C.) 2 Peking University Information Technology Institute (Tianjin Binhai), Tianjin 300452, China * Correspondence: zhangyc@pku.edu.cn", "title": "" }, { "docid": "efd6856e774b258858c43d7746639317", "text": "In this paper, we propose a vision-based robust vehicle distance estimation algorithm that supports motorists to rapidly perceive relative distance of oncoming and passing vehicles thereby minimizing the risk of hazardous circumstances. And, as it is expected, the silhouettes of background stationary objects may appear in the motion scene, which pop-up due to motion of the camera, which is mounted on dashboard of the host vehicle. To avoid the effect of false positive detection of stationary objects and to determine the ego motion a new Morphological Strip Matching Algorithm and Recursive Stencil Mapping Algorithm(MSM-RSMA)is proposed. A new series of stencils are created where non-stationary objects are taken off after detecting stationary objects by applying a shape matching technique to each image strip pair. Then the vertical shift is estimated recursively with new stencils with identified stationary background objects. 
Finally, relative comparison of known templates are used to estimate the distance, which is further certified by value obtained for vertical shift. We apply analysis of relative dimensions of bounding box of the detected vehicle with relevant templates to calculate the relative distance. We prove that our method is capable of providing a comparatively fast distance estimation while keeping its robustness in different environments changes.", "title": "" }, { "docid": "42b0c0c340cfb49e1eb7c07e8f251f94", "text": "The fisheries sector in the course of the last three decades have been transformed from a developed country to a developing country dominance. Aquaculture, the farming of waters, though a millennia old tradition during this period has become a significant contributor to food fish production, currently accounting for nearly 50 % of global food fish consumption; in effect transforming our dependence from a hunted to a farmed supply as for all our staple food types. Aquaculture and indeed the fisheries sector as a whole is predominated in the developing countries, and accordingly the development strategies adopted by the sector are influenced by this. Aquaculture also being a newly emerged food production sector has being subjected to an increased level of public scrutiny, and one of the most contentious aspects has been its impacts on biodiversity. In this synthesis an attempt is made to assess the impacts of aquaculture on biodiversity. Instances of major impacts on biodiversity conservation arising from aquaculture, such as land use, effluent discharge, effects on wild populations, alien species among others are highlighted and critically examined. The influence of paradigm changes in development strategies and modern day market forces have begun to impact on aquaculture developments. Consequently, improvements in practices and adoption of more environmentally friendly approaches that have a decreasing negative influence on biodiversity conservation are highlighted. An attempt is also made to demonstrate direct and or indirect benefits of aquaculture, such as through being a substitute to meet human needs for food, particularly over-exploited and vulnerable fish stocks, and for other purposes (e.g. medicinal ingredients), on biodiversity conservation, often a neglected entity.", "title": "" }, { "docid": "03368de546daf96d5111325f3d08fd3d", "text": "Despite the widespread use of social media by students and its increased use by instructors, very little empirical evidence is available concerning the impact of social media use on student learning and engagement. This paper describes our semester-long experimental study to determine if using Twitter – the microblogging and social networking platform most amenable to ongoing, public dialogue – for educationally relevant purposes can impact college student engagement and grades. A total of 125 students taking a first year seminar course for pre-health professional majors participated in this study (70 in the experimental group and 55 in the control group). With the experimental group, Twitter was used for various types of academic and co-curricular discussions. Engagement was quantified by using a 19-item scale based on the National Survey of Student Engagement. To assess differences in engagement and grades, we used mixed effects analysis of variance (ANOVA) models, with class sections nested within treatment groups. We also conducted content analyses of samples of Twitter exchanges. 
The ANOVA results showed that the experimental group had a significantly greater increase in engagement than the control group, as well as higher semester grade point averages. Analyses of Twitter communications showed that students and faculty were both highly engaged in the learning process in ways that transcended traditional classroom activities. This study provides experimental evidence that Twitter can be used as an educational tool to help engage students and to mobilize faculty into a more active and participatory role.", "title": "" }, { "docid": "01cb25375745cd8fdc6d2a546910acb4", "text": "Digital technology innovations have led to significant changes in everyday life, made possible by the widespread use of computers and continuous developments in information technology (IT). Based on the utilization of systems applying 3D(three-dimensional) technology, as well as virtual and augmented reality techniques, IT has become the basis for a new fashion industry model, featuring consumer-centered service and production methods. Because of rising wages and production costs, the fashion industry’s international market power has been significantly weakened in recent years. To overcome this situation, new markets must be established by building a new knowledge and technology-intensive fashion industry. Development of virtual clothing simulation software, which has played an important role in the fashion industry’s IT-based digitalization, has led to continuous technological improvements for systems that can virtually adapt existing 2D(two-dimensional) design work to 3D design work. Such adaptions have greatly influenced the fashion industry by increasing profits. Both here and abroad, studies have been conducted to support the development of consumercentered, high value-added clothing and fashion products by employing digital technology. This study proposes a system that uses a depth camera to capture the figure of a user standing in front of a large display screen. The display can show fashion concepts and various outfits to the user, coordinated to his or her body. Thus, a “magic mirror” effect is produced. Magic mirror-based fashion apparel simulation can support total fashion coordination for accessories and outfits automatically, and does not require computer or fashion expertise. This system can provide convenience for users by assuming the role of a professional fashion coordinator giving an appearance presentation. It can also be widely used to support a customized method for clothes shopping.", "title": "" }, { "docid": "96f0fc6d4a34d453b3debf579051761d", "text": "Feature construction is of vital importance in reinforcement learning, as the quality of a value function or policy is largely determined by the corresponding features. The recent successes of deep reinforcement learning (RL) only increase the importance of understanding feature construction. Typical deep RL approaches use a linear output layer, which means that deep RL can be interpreted as a feature construction/encoding network followed by linear value function approximation. This paper develops and evaluates a theory of linear feature encoding. We extend theoretical results on feature quality for linear value function approximation from the uncontrolled case to the controlled case. We then develop a supervised linear feature encoding method that is motivated by insights from linear value function approximation theory, as well as empirical successes from deep RL. 
The resulting encoder is a surprisingly effective method for linear value function approximation using raw images as inputs.", "title": "" }, { "docid": "42f58a4f6d67e76b7fb584624b8c064c", "text": "The cuckoo search algorithm (CS) is a simple and effective global optimization algorithm. It has been applied to solve a wide range of real-world optimization problem. In this paper, the proposed method uses two new mutation rules based on the rand and best individuals among the entire population. In order to balance the exploitation and exploration of the algorithm, the new rules are combined through a linear decreasing probability rule. Then, self adaptive parameter setting is introduced as a uniform random value to enhance the diversity of the population based on the relative success number of the proposed two new parameters in the previous period. To verify the performance of SACS, 16 benchmark functions chosen from literature are employed. Experimental results indicate that the proposed method performs better than, or at least comparable to state-of-the-art methods from literature when considering the quality of the solutions obtained. In the last part, experiments have been conducted on Lorenz system and Chen system to estimate the parameters of these two chaotic systems. Simulation results further demonstrate the proposed method is very effective. 2014 Published by Elsevier Inc.", "title": "" }, { "docid": "03b7313e2a52dd16a2c3bf712b4a0a19", "text": "Recommender systems have explored a range of implicit feedback approaches to capture users’ current interests and preferences without intervention of users’ work. However, the problem of implicit feedback elicit negative feedback, because users mainly target information they want. Therefore, there have been few studies to test how effective negative implicit feedback is to personalize information. In this paper, we assess whether implicit negative feedback can be used to improve recommendation quality.", "title": "" }, { "docid": "ad9d3b13795f7708c634d23615f2dd35", "text": "We introduce a variational inference framework for training the Gaussian process latent variable model and thus performing Bayesian nonlinear dimensionality reduction. This method allows us to variationally integrate out the input variables of the Gaussian process and compute a lower bound on the exact marginal likelihood of the nonlinear latent variable model. The maximization of the variational lower bound provides a Bayesian training procedure that is robust to overfitting and can automatically select the dimensionality of the nonlinear latent space. We demonstrate our method on real world datasets. The focus in this paper is on dimensionality reduction problems, but the methodology is more general. For example, our algorithm is immediately applicable for training Gaussian process models in the presence of missing or uncertain inputs.", "title": "" }, { "docid": "0b357696dd2b68a7cef39695110e4e1b", "text": "Polypharmacology has emerged as novel means in drug discovery for improving treatment response in clinical use. However, to really capitalize on the polypharmacological effects of drugs, there is a critical need to better model and understand how the complex interactions between drugs and their cellular targets contribute to drug efficacy and possible side effects. Network graphs provide a convenient modeling framework for dealing with the fact that most drugs act on cellular systems through targeting multiple proteins both through on-target and off-target binding. 
Network pharmacology models aim at addressing questions such as how and where in the disease network should one target to inhibit disease phenotypes, such as cancer growth, ideally leading to therapies that are less vulnerable to drug resistance and side effects by means of attacking the disease network at the systems level through synergistic and synthetic lethal interactions. Since the exponentially increasing number of potential drug target combinations makes pure experimental approach quickly unfeasible, this review depicts a number of computational models and algorithms that can effectively reduce the search space for determining the most promising combinations for experimental evaluation. Such computational-experimental strategies are geared toward realizing the full potential of multi-target treatments in different disease phenotypes. Our specific focus is on system-level network approaches to polypharmacology designs in anticancer drug discovery, where we give representative examples of how network-centric modeling may offer systematic strategies toward better understanding and even predicting the phenotypic responses to multi-target therapies.", "title": "" }, { "docid": "b0b7521bc1e0670899ac86bb8f8bb581", "text": "With the increasing use of low-voltage portable devices and growing requirements of functionalities embedded into such devices, efficient dc/dc conversion and power management techniques are needed. In this paper, an architecture for boosting extremely low voltages (about 100 mV) to the typical supply voltages of current integrated circuits is presented which is suitable for power harvesting applications too. Starting from a 120-mV supply voltage, the converter reaches an output voltage of 1.2 V, providing an output current of 220 μA and exhibiting a maximum power efficiency of about 30%. Along with the dc/dc converter, a power management circuit is presented, which can regulate the output voltage and improve the overall efficiency. A test chip was fabricated using a United Microelectronics Corporation 180-nm low-threshold CMOS process.", "title": "" }, { "docid": "6bdb93e9bd59dbbf5dfd63b7343c816f", "text": "Finding the best matching job offers for a candidate profile or, the best candidates profiles for a particular job offer, respectively constitutes the most common and most relevant type of queries in the Human Resources sector. This technically requires to investigate top-k queries on top of knowledge bases and relational databases. We propose in this paper a top-k query algorithm on relational databases able to produce effective and efficient results. The approach is to consider the partial order of matching relations between jobs and candidates profiles together with an efficient design of the data involved. In particular, the focus on a single relation, the matching relation, is crucial to achieve the expectations.", "title": "" }, { "docid": "47f2a5a61677330fc85ff6ac700ac39f", "text": "We present CHALET, a 3D house simulator with support for navigation and manipulation. CHALET includes 58 rooms and 10 house configuration, and allows to easily create new house and room layouts. CHALET supports a range of common household activities, including moving objects, toggling appliances, and placing objects inside closeable containers. 
The environment and actions available are designed to create a challenging domain to train and evaluate autonomous agents, including for tasks that combine language, vision, and planning in a dynamic environment.", "title": "" }, { "docid": "bb03f3bac93847d8a09b5a6dddc77148", "text": "We participate in the two event extraction tasks of BioNLP 2016 Shared Task: binary relation extraction of SeeDev task and localization relations extraction of Bacteria Biotope task. Convolutional neural network (CNN) is employed to model the sentences by convolution and maxpooling operation from raw input with word embedding. Then, full connected neural network is used to learn senior and significant features automatically. The proposed model mainly contains two modules: distributive semantic representation building, such as word embedding, POS embedding, distance embedding and entity type embedding, and CNN model training. The results with F-score of 0.370 and 0.478 in our participant tasks, which were evaluated on the test data set, show that our proposed method contributes to binary relation extraction effectively and can reduce the impact of artificial feature engineering through automatically feature learning.", "title": "" }, { "docid": "fab47ba2ca0b1fe26ae4aa11f7be4450", "text": "Matrix approximation is a common tool in recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model where we assume instead that the matrix is locally of low-rank, leading to a representation of the observed matrix as a weighted sum of low-rank matrices. We analyze the accuracy of the proposed local lowrank modeling. Our experiments show improvements in prediction accuracy over classical approaches for recommendation tasks.", "title": "" }, { "docid": "52c40ec5f1cdd037933f838fd59707a6", "text": "The Advanced-ANPC converter is designed to withstand almost every possible short-circuit failure of the semiconductors in comparison to state-of-the-art (A)NPC converters. The idea behind this concept is an additional short-circuit inductor to improve the robustness against faults. The presented investigations within this paper examine all the possible short-circuit situations and their effects for an (A)ANPC converter with high-voltage IGBTs.", "title": "" } ]
scidocsrr
63245751faa3456ef2a465d6046db9e4
Effective named entity recognition for idiosyncratic web collections
[ { "docid": "5609709136a45f355f988a7a4ec7857c", "text": "Traditional information extraction systems have focused on satisfying precise, narrow, pre-specified requests from small, homogeneous corpora. In contrast, the TextRunner system demonstrates a new kind of information extraction, called Open Information Extraction (OIE), in which the system makes a single, data-driven pass over the entire corpus and extracts a large set of relational tuples, without requiring any human input. (Banko et al., 2007) TextRunner is a fullyimplemented, highly scalable example of OIE. TextRunner’s extractions are indexed, allowing a fast query mechanism. Our first public demonstration of the TextRunner system shows the results of performing OIE on a set of 117 million web pages. It demonstrates the power of TextRunner in terms of the raw number of facts it has extracted, as well as its precision using our novel assessment mechanism. And it shows the ability to automatically determine synonymous relations and objects using large sets of extractions. We have built a fast user interface for querying the results.", "title": "" } ]
[ { "docid": "5cd29a37f0357aa242244aef4d12a87d", "text": "LEARNING OBJECTIVES\nAfter studying this article, the participant should be able to: 1. Describe the alternatives for auricular reconstruction. 2. Discuss the pros and cons of autogenous reconstruction of total or subtotal auricular defects. 3. Enumerate the indications for prosthetic reconstruction of total or subtotal auricular defects. 4. Understand the complexity of and the expertise required for prosthetic reconstruction of auricular defects. The indications for autogenous auricular reconstruction versus prosthetic reconstruction with osseointegrated implant-retained prostheses were outlined in Plastic and Reconstructive Surgery in 1994 by Wilkes et al. of Canada, but because of the relatively recent Food and Drug Administration approval (1995) of extraoral osseointegrated implants, these indications had not been examined by a surgical unit in the United States. The purpose of this article is to present an evolving algorithm based on an experience with 98 patients who underwent auricular reconstruction over a 10-year period. From this experience, the authors conclude that autogenous reconstruction is the procedure of choice in the majority of pediatric patients with microtia. Prosthetic reconstruction of the auricle is considered in such pediatric patients with congenital deformities for the following three relative indications: (1) failed autogenous reconstruction, (2) severe soft-tissue/skeletal hypoplasia, and/or (3) a low or unfavorable hairline. A fourth, and in our opinion the ideal, indication for prosthetic ear reconstruction is the acquired total or subtotal auricular defect, most often traumatic or ablative in origin, which is usually encountered in adults. Although prosthetic reconstruction requires surgical techniques that are less demanding than autogenous reconstruction, construction of the prosthesis is a time-consuming task requiring experience and expertise. Although autogenous reconstruction presents a technical challenge to the surgeon, it is the prosthetic reconstruction that requires lifelong attention and may be associated with late complications. This article reports the first American series of auricular reconstruction containing both autogenous and prosthetic methods by a single surgical team.", "title": "" }, { "docid": "a60a60a345fed5e16df157ebf2951c3f", "text": "A dielectric fibre with a refractive index higher than its surrounding region is a form of dielectric waveguide which represents a possible medium for the guided transmission of energy at optical frequencies. The particular type of dielectric-fibre waveguide discussed is one with a circular cross-section. The choice of the mode of propagation for a fibre waveguide used for communication purposes is governed by consideration of loss characteristics and information capacity. Dielectric loss, bending loss and radiation loss are discussed, and mode stability, dispersion and power handling are examined with respect to information capacity. Physicalrealisation aspects are also discussed. Experimental investigations at both optical and microwave wavelengths are included. 
List of principal symbols: Jn = nth-order Bessel function of the first kind; Kn = nth-order modified Bessel function of the second kind; β = 2π/λg, phase coefficient of the waveguide; J'n = first derivative of Jn; K'n = first derivative of Kn; hi = radial wavenumber or decay coefficient; εr = relative permittivity; k0 = free-space propagation coefficient; a = radius of the fibre; γ = longitudinal propagation coefficient; k = Boltzmann's constant; T = absolute temperature, K; βc = isothermal compressibility; λ = wavelength; n = refractive index; Hν = νth-order Hankel function of the ith type; H'ν = first derivative of Hν; ν = azimuthal propagation coefficient = ν1 - jν2; L = modulation period. Subscript n is an integer and subscript m refers to the mth root of Jn = 0.", "title": "" }, { "docid": "70509b891a45c8cdd0f2ed02207af06f", "text": "This paper presents an algorithm for drawing a sequence of graphs online. The algorithm strives to maintain the global structure of the graph and, thus, the user's mental map while allowing arbitrary modifications between consecutive layouts. The algorithm works online and uses various execution culling methods in order to reduce the layout time and handle large dynamic graphs. Techniques for representing graphs on the GPU allow a speedup by a factor of up to 17 compared to the CPU implementation. The scalability of the algorithm across GPU generations is demonstrated. Applications of the algorithm to the visualization of discussion threads in Internet sites and to the visualization of social networks are provided.", "title": "" }, { "docid": "26b38a6dc48011af80547171a9f3ecbd", "text": "This work addresses two classification problems that fall under the heading of domain adaptation, wherein the distributions of training and testing examples differ. The first problem studied is that of class proportion estimation, which is the problem of estimating the class proportions in an unlabeled testing data set given labeled examples of each class. Compared to previous work on this problem, our approach has the novel feature that it does not require labeled training data from one of the classes. This property allows us to address the second domain adaptation problem, namely, multiclass anomaly rejection. Here, the goal is to design a classifier that has the option of assigning a “reject” label, indicating that the instance did not arise from a class present in the training data. We establish consistent learning strategies for both of these domain adaptation problems, which to our knowledge are the first of their kind. We also implement the class proportion estimation technique and demonstrate its performance on several benchmark data sets.", "title": "" }, { "docid": "10abe464698cf38cce7df46718dfa83c", "text": "We have developed an approach using Bayesian networks to predict protein-protein interactions genome-wide in yeast. Our method naturally weights and combines into reliable predictions genomic features only weakly associated with interaction (e.g., messenger RNA coexpression, coessentiality, and colocalization). In addition to de novo predictions, it can integrate often noisy, experimental interaction data sets. We observe that at given levels of sensitivity, our predictions are more accurate than the existing high-throughput experimental data sets. We validate our predictions with TAP (tandem affinity purification) tagging experiments. 
Our analysis, which gives a comprehensive view of yeast interactions, is available at genecensus.org/intint.", "title": "" }, { "docid": "a27c96091d6d806b05730e76377927e0", "text": "Visual priming is known to affect the human visual system to allow detection of scene elements, even those that may have been near unnoticeable before, such as the presence of camouflaged animals. This process has been shown to be an effect of top-down signaling in the visual system triggered by the said cue. In this paper, we propose a mechanism to mimic the process of priming in the context of object detection and segmentation. We view priming as having a modulatory, cue dependent effect on layers of features within a network. Our results show how such a process can be complementary to, and at times more effective than simple post-processing applied to the output of the network, notably so in cases where the object is hard to detect such as in severe noise, small size or atypical appearance. Moreover, we find the effects of priming are sometimes stronger when early visual layers are affected. Overall, our experiments confirm that top-down signals can go a long way in improving object detection and segmentation.", "title": "" }, { "docid": "e6d05a96665c2651c0b31f1bff67f04d", "text": "Detecting the neural processes like axons and dendrites needs high quality SEM images. This paper proposes an approach using perceptual grouping via a graph cut and its combinations with Convolutional Neural Network (CNN) to achieve improved segmentation of SEM images. Experimental results demonstrate improved computational efficiency with linear running time.", "title": "" }, { "docid": "81a45cb4ca02c38839a81ad567eb1491", "text": "Big data is often mined using clustering algorithms. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular spatial clustering algorithm. However, it is computationally expensive and thus for clustering big data, parallel processing is required. The two prevalent paradigms for parallel processing are High-Performance Computing (HPC) based on Message Passing Interface (MPI) or Open Multi-Processing (OpenMP) and the newer big data frameworks such as Apache Spark or Hadoop. This report surveys for these two different paradigms publicly available implementations that aim at parallelizing DBSCAN and compares their performance. As a result, it is found that the big data implementations are not yet mature and in particular for skewed data, the implementation’s decomposition of the input data into parallel tasks has a huge influence on the performance in terms of running time.", "title": "" }, { "docid": "4ae0a359405b8eb870bb5c447667cdc2", "text": "The aim of the study was to describe mentoring profile, correlate mentoring profile with mentoring effectiveness and career related outcomes. Cross sectional descriptive research with key informant interviews and survey as data collection procedures of all colleges of pharmacy in the NCR were employed for the studies. There were 13 deans, 80 junior faculty members and 34 identified mentors that participated in the study at 89.4% total response rate. Majority of relationships were between a junior and senior faculty member occurring in an informal and unstructured way. The benefits of mentoring relationships were higher percentage of research involvement, higher frequency of administrative positions, and more career related outcomes. 
The regression equation created from the analysis are [mentoring effectiveness score = 6.16 + 0.45 (cultivation phase) + 0.48 (formal mentoring program)] and [career related outcomes = 0.31 (mentoring effectiveness) – 0.08.]. The author recommends the creation of institutionalized formal mentoring programs that include characteristics of the program correlated to positive results.", "title": "" }, { "docid": "4c39b9a4e9822fb6d0a000c55d71faa5", "text": "Suicidal decapitation is seldom encountered in forensic medicine practice. This study reports the analysis of a suicide committed by a 31-year-old man with a self-fabricated guillotine. The construction of the guillotine was very interesting and sophisticated. The guillotine-like blade with additional weight was placed in a large metal frame. The movement of the blade was controlled by the frame rails. The steel blade was triggered by a tensioned rubber band after releasing the safety catch. The cause of death was immediate exsanguination after complete severance of the neck. The suicide motive was most likely emotional distress after the death of his father. In medico-legal literature, there has been only one similar case of suicidal complete decapitation by a guillotine described.", "title": "" }, { "docid": "782eaf93618c0e6b066519459bcdbdad", "text": "A model based on strikingly different philosophical as. sumptions from those currently popular is proposed for the design of online subject catalog access. Three design principles are presented and discussed: uncertainty (subject indexing is indeterminate and probabilis-tic beyond a certain point), variety (by Ashby's law of requisite variety, variety of searcher query must equal variety of document indexing), and complexity (the search process, particularly during the entry and orientation phases, is subtler and more complex, on several grounds, than current models assume). Design features presented are an access phase, including entry and orientation , a hunting phase, and a selection phase. An end-user thesaurus and a front-end system mind are presented as examples of online catalog system components to improve searcher success during entry and orientation. The proposed model is \" wrapped around \" existing Library of Congress subject-heading indexing in such a way as to enhance access greatly without requiring reindexing. It is argued that both for cost reasons and in principle this is a superior approach to other design philosophies .", "title": "" }, { "docid": "cfe5d769b9d479dccd543f8a4d23fcf9", "text": "This paper aims to describe the role of advanced sensing systems in the electric grid of the future. In detail, the project, development, and experimental validation of a smart power meter are described in the following. The authors provide an outline of the potentialities of the sensing systems and IoT to monitor efficiently the energy flow among nodes of an electric network. The described power meter uses the metrics proposed in the IEEE Standard 1459–2010 to analyze and process voltage and current signals. Information concerning the power consumption and power quality could allow the power grid to route efficiently the energy by means of more suitable decision criteria. The new scenario has changed the way to exchange energy in the grid. Now, energy flow must be able to change its direction according to needs. Energy cannot be now routed by considering just only the criterion based on the simple shortening of transmission path. 
So, even energy coming from a far node should be preferred, if it has higher quality standards. In this view, the proposed smart power meter intends to support the smart power grid to monitor electricity among different nodes in an efficient and effective way.", "title": "" }, { "docid": "f645a4dc6d3eba8536dac317770f43c6", "text": "We propose a method for removing marked dynamic objects from videos captured with a free-moving camera, so long as the objects occlude parts of the scene with a static background. Our approach takes as input a video, a mask marking the object to be removed, and a mask marking the dynamic objects to remain in the scene. To inpaint a frame, we align other candidate frames in which parts of the missing region are visible. Among these candidates, a single source is chosen to fill each pixel so that the final arrangement is color-consistent. Intensity differences between sources are smoothed using gradient domain fusion. Our frame alignment process assumes that the scene can be approximated using piecewise planar geometry: A set of homographies is estimated for each frame pair, and one each is selected for aligning pixels such that the color-discrepancy is minimized and the epipolar constraints are maintained. We provide experimental validation with several real-world video sequences to demonstrate that, unlike in previous work, inpainting videos shot with free-moving cameras does not necessarily require estimation of absolute camera positions and per-frame per-pixel depth maps.", "title": "" }, { "docid": "4f7fdd852f520f6928eeb69b3d0d1632", "text": "Hadoop MapReduce is a popular framework for distributed storage and processing of large datasets and is used for big data analytics. It has various configuration parameters which play an important role in deciding the performance i.e., the execution time of a given big data processing job. Default values of these parameters do not result in good performance and therefore it is important to tune them. However, there is inherent difficulty in tuning the parameters due to two important reasons - first, the parameter search space is large and second, there are cross-parameter interactions. Hence, there is a need for a dimensionality-free method which can automatically tune the configuration parameters by taking into account the cross-parameter dependencies. In this paper, we propose a novel Hadoop parameter tuning methodology, based on a noisy gradient algorithm known as the simultaneous perturbation stochastic approximation (SPSA). The SPSA algorithm tunes the selected parameters by directly observing the performance of the Hadoop MapReduce system. The approach followed is independent of parameter dimensions and requires only 2 observations per iteration while tuning. We demonstrate the effectiveness of our methodology in achieving good performance on popular Hadoop benchmarks namely Grep, Bigram, Inverted Index, Word Co-occurrence and Terasort. Our method, when tested on a 25 node Hadoop cluster shows 45-66% decrease in execution time of Hadoop jobs on an average, when compared to prior methods. Further, our experiments also indicate that the parameters tuned by our method are resilient to changes in number of cluster nodes, which makes our method suitable to optimize Hadoop when it is provided as a service on the cloud.", "title": "" }, { "docid": "5588fd19a3d0d73598197ad465315fd6", "text": "The growing need for Chinese natural language processing (NLP) is largely in a range of research and commercial applications. 
However, most of the currently Chinese NLP tools or components still have a wide range of issues need to be further improved and developed. FudanNLP is an open source toolkit for Chinese natural language processing (NLP), which uses statistics-based and rule-based methods to deal with Chinese NLP tasks, such as word segmentation, part-ofspeech tagging, named entity recognition, dependency parsing, time phrase recognition, anaphora resolution and so on.", "title": "" }, { "docid": "745cdbb442c73316f691dc20cc696f31", "text": "Computer-generated texts, whether from Natural Language Generation (NLG) or Machine Translation (MT) systems, are often post-edited by humans before being released to users. The frequency and type of post-edits is a measure of how well the system works, and can be used for evaluation. We describe how we have used post-edit data to evaluate SUMTIME-MOUSAM, an NLG system that produces weather forecasts.", "title": "" }, { "docid": "f1c1a0baa9f96d841d23e76b2b00a68d", "text": "Introduction to Derivative-Free Optimization Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente The absence of derivatives, often combined with the presence of noise or lack of smoothness, is a major challenge for optimization. This book explains how sampling and model techniques are used in derivative-free methods and how these methods are designed to efficiently and rigorously solve optimization problems. Although readily accessible to readers with a modest background in computational mathematics, it is also intended to be of interest to researchers in the field. 2009 · xii + 277 pages · Softcover · ISBN 978-0-898716-68-9 List Price $73.00 · RUNDBRIEF Price $51.10 · Code MP08", "title": "" }, { "docid": "82865170278997209a650aa8be483703", "text": "This paper presents a novel dataset for traffic accidents analysis. Our goal is to resolve the lack of public data for research about automatic spatio-temporal annotations for traffic safety in the roads. Through the analysis of the proposed dataset, we observed a significant degradation of object detection in pedestrian category in our dataset, due to the object sizes and complexity of the scenes. To this end, we propose to integrate contextual information into conventional Faster R-CNN using Context Mining (CM) and Augmented Context Mining (ACM) to complement the accuracy for small pedestrian detection. Our experiments indicate a considerable improvement in object detection accuracy: +8.51% for CM and +6.20% for ACM. Finally, we demonstrate the performance of accident forecasting in our dataset using Faster R-CNN and an Accident LSTM architecture. We achieved an average of 1.684 seconds in terms of Time-To-Accident measure with an Average Precision of 47.25%. Our Webpage for the paper is https:", "title": "" }, { "docid": "adc587c3400cdf927c433e9d0f929894", "text": "With continuous increase in urban population, the need to plan and implement smart cities based solutions for better urban governance is becoming more evident. These solutions are driven, on the one hand, by innovations in ICT and, on the other hand, to increase the capability and capacity of cities to mitigate environmental, social inclusion, economic growth and sustainable development challenges. In this respect, citizens' science or public participation provides a key input for informed and intelligent planning decision and policy making. 
However, the challenge here is to facilitate public in acquiring the right contextual information in order to be more productive, innovative and be able to make appropriate decisions which impact on their well being, in particular, and economic and environmental sustainability in general. Such a challenge requires contemporary ICT solutions, such as using Cloud computing, capable of storing and processing significant amount of data and produce intelligent contextual information. However, processing and visualising contextual information in a Cloud environment is not straightforward due to user profiling and contextual segregation of data that could be used in different applications of a smart city. In this regard, we present a Cloud-based architecture for context-aware citizen services for smart cities and walkthrough it using a hypothetical case study.", "title": "" }, { "docid": "073b17e195cec320c20533f154d4ab7f", "text": "Automatic segmentation of cell nuclei is an essential step in image cytometry and histometry. Despite substantial progress, there is a need to improve accuracy, speed, level of automation, and adaptability to new applications. This paper presents a robust and accurate novel method for segmenting cell nuclei using a combination of ideas. The image foreground is extracted automatically using a graph-cuts-based binarization. Next, nuclear seed points are detected by a novel method combining multiscale Laplacian-of-Gaussian filtering constrained by distance-map-based adaptive scale selection. These points are used to perform an initial segmentation that is refined using a second graph-cuts-based algorithm incorporating the method of alpha expansions and graph coloring to reduce computational complexity. Nuclear segmentation results were manually validated over 25 representative images (15 in vitro images and 10 in vivo images, containing more than 7400 nuclei) drawn from diverse cancer histopathology studies, and four types of segmentation errors were investigated. The overall accuracy of the proposed segmentation algorithm exceeded 86%. The accuracy was found to exceed 94% when only over- and undersegmentation errors were considered. The confounding image characteristics that led to most detection/segmentation errors were high cell density, high degree of clustering, poor image contrast and noisy background, damaged/irregular nuclei, and poor edge information. We present an efficient semiautomated approach to editing automated segmentation results that requires two mouse clicks per operation.", "title": "" } ]
scidocsrr
0d830d61d9a60c855d5fca182121267a
mD3DOCKxb: An Ultra-Scalable CPU-MIC Coordinated Virtual Screening Framework
[ { "docid": "c466e5b11908d6cd5bd230d32bf3140e", "text": "ZINC is a free public resource for ligand discovery. The database contains over twenty million commercially available molecules in biologically relevant representations that may be downloaded in popular ready-to-dock formats and subsets. The Web site also enables searches by structure, biological activity, physical property, vendor, catalog number, name, and CAS number. Small custom subsets may be created, edited, shared, docked, downloaded, and conveyed to a vendor for purchase. The database is maintained and curated for a high purchasing success rate and is freely available at zinc.docking.org.", "title": "" } ]
[ { "docid": "83e087b566c342b500cdaba6584388b5", "text": "In this paper, a design method of Ka-band Wilkinson power divider based on the detail model of real lumped chip resistor is presented, and this model is got from its geometric size and electric property. The structure of this power divider and the formulas used to determine the design parameters have been given. Experimental results show that the insertion loss is about 4.7 dB with return loss below -13 dB, and the isolation between two output ports is better than -25 dB over 28.3 GHz~33.8 GHz.", "title": "" }, { "docid": "bbb91e336f0125c0e8a0358f6afc9ef1", "text": "In this paper, we study a new learning paradigm for neural machine translation (NMT). Instead of maximizing the likelihood of the human translation as in previous works, we minimize the distinction between human translation and the translation given by an NMT model. To achieve this goal, inspired by the recent success of generative adversarial networks (GANs), we employ an adversarial training architecture and name it as AdversarialNMT. In Adversarial-NMT, the training of the NMT model is assisted by an adversary, which is an elaborately designed 2D convolutional neural network (CNN). The goal of the adversary is to differentiate the translation result generated by the NMT model from that by human. The goal of the NMT model is to produce high quality translations so as to cheat the adversary. A policy gradient method is leveraged to co-train the NMT model and the adversary. Experimental results on English→French and German→English translation tasks show that Adversarial-NMT can achieve significantly better translation quality than several strong baselines.", "title": "" }, { "docid": "90414004f8681198328fb48431a34573", "text": "Process models play important role in computer aided process engineering. Although the structure of these models are a priori known, model parameters should be estimated based on experiments. The accuracy of the estimated parameters largely depends on the information content of the experimental data presented to the parameter identification algorithm. Optimal experiment design (OED) can maximize the confidence on the model parameters. The paper proposes a new additive sequential evolutionary experiment design approach to maximize the amount of information content of experiments. The main idea is to use the identified models to design new experiments to gradually improve the model accuracy while keeping the collected information from previous experiments. This scheme requires an effective optimization algorithm, hence the main contribution of the paper is the incorporation of Evolutionary Strategy (ES) into a new iterative scheme of optimal experiment design (AS-OED). This paper illustrates the applicability of AS-OED for the design of feeding profile for a fed-batch biochemical reactor.", "title": "" }, { "docid": "59b26acc158c728cf485eae27de665f7", "text": "The ability of the parasite Plasmodium falciparum to evade the immune system and be sequestered within human small blood vessels is responsible for severe forms of malaria. The sequestration depends on the interaction between human endothelial receptors and P. falciparum erythrocyte membrane protein 1 (PfEMP1) exposed on the surface of the infected erythrocytes (IEs). In this study, the transcriptomes of parasite populations enriched for parasites that bind to human P-selectin, E-selectin, CD9 and CD151 receptors were analysed. 
IT4_var02 and IT4_var07 were specifically expressed in IT4 parasite populations enriched for P-selectin-binding parasites; eight var genes (IT4_var02/07/09/13/17/41/44/64) were specifically expressed in isolate populations enriched for CD9-binding parasites. Interestingly, IT4 parasite populations enriched for E-selectin- and CD151-binding parasites showed identical expression profiles to those of a parasite population exposed to wild-type CHO-745 cells. The same phenomenon was observed for the 3D7 isolate population enriched for binding to P-selectin, E-selectin, CD9 and CD151. This implies that the corresponding ligands for these receptors have either weak binding capacity or do not exist on the IE surface. Conclusively, this work expanded our understanding of P. falciparum adhesive interactions, through the identification of var transcripts that are enriched within the selected parasite populations.", "title": "" }, { "docid": "dd956cadc4158b6529cca0966c446845", "text": "One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification.", "title": "" }, { "docid": "3c444d8918a31831c2dc73985d511985", "text": "This paper presents methods for collecting and analyzing physiological data during real-world driving tasks to determine a driver's relative stress level. Electrocardiogram, electromyogram, skin conductance, and respiration were recorded continuously while drivers followed a set route through open roads in the greater Boston area. Data from 24 drives of at least 50-min duration were collected for analysis. The data were analyzed in two ways. Analysis I used features from 5-min intervals of data during the rest, highway, and city driving conditions to distinguish three levels of driver stress with an accuracy of over 97% across multiple drivers and driving days. Analysis II compared continuous features, calculated at 1-s intervals throughout the entire drive, with a metric of observable stressors created by independent coders from videotapes. The results show that for most drivers studied, skin conductivity and heart rate metrics are most closely correlated with driver stress level. These findings indicate that physiological signals can provide a metric of driver stress in future cars capable of physiological monitoring. Such a metric could be used to help manage noncritical in-vehicle information systems and could also provide a continuous measure of how different road and traffic conditions affect drivers.", "title": "" }, { "docid": "2316c2c0115dd0d59f5a0a3c44a246d7", "text": "Today's organizations are highly dependent on information management and processes. Information security is one of the top issues for researchers and practitioners. 
In the literature, there is consensus that employees are the weakest link in IS security. A variety of researchers discuss explanations for employees' security-related awareness and behavior. This paper presents a theory-based literature review of the extant approaches used within employees' information security awareness and behavior research over the past decade. In total, 113 publications were identified and analyzed. The information security research community covers 54 different theories. Focusing on the four main behavioral theories, a state-of-the-art overview of employees' security awareness and behavior research over the past decade is given. From there, gaps in existing research are uncovered and implications and recommendations for future research are discussed. The literature review might also be useful for practitioners that need information about behavioral factors that are critical to the success of an organization's security awareness.", "title": "" }, { "docid": "63d4b76933b19c50796f7ffa455334d3", "text": "Darwin envisioned a scientific revolution for psychology. His theories of natural and sexual selection identified two classes of struggles--the struggle for existence and the struggle for mates. The emergence of evolutionary psychology and related disciplines signals the fulfillment of Darwin's vision. Natural selection theory guides scientists to discover adaptations for survival. Sexual selection theory illuminates the sexual struggle, highlighting mate choice and same-sex competition adaptations. Theoretical developments since publication of On the Origin of Species identify important struggles unknown to Darwin, notably, within-families conflicts and conflict between the sexes. Evolutionary psychology synthesizes modern evolutionary biology and psychology to penetrate some of life's deep mysteries: Why do many struggles center around sex? Why is social conflict pervasive? And what are the mechanisms of mind that define human nature?", "title": "" }, { "docid": "7f09bdd6a0bcbed0d9525c5d20cf8cbb", "text": "Distributed ledgers are increasingly being thought of as a platform for decentralised applications — DApps — and the focus for many is shifting from Bitcoin to Smart Contracts. It’s thought that encoding contracts and putting them “on the blockchain” will result in a new generation of organisations that are leaner and more efficient than their forebears (“Capps”?), disrupting these forebears in the process. However, the most interesting aspect of Bitcoin and blockchain is that it involved no new technology, no new math. Their emergence was due to changes in the environment: the price-performance and penetration of broadband networks reached a point that it was economically viable for a decentralised solution, such as Bitcoin, to compete with traditional payment (international remittance) networks. This is combining with another trend — the shift from monolithic firms to multi-sided markets such as AirBnb et al and the rise of “platform businesses” — to enable a new class of solution to emerge. These new solutions enable firms to interact directly, without the need for a facilitator such as a market, exchange, or even a blockchain. In the past these facilitators were firms. More recently they have been “platform businesses.” In the future they may not exist at all. The shift to a distributed environment enables us to reconsider many of the ideas from distributed AI and linked data. Where are the opportunities?
How can we avoid the mistakes of the past?", "title": "" }, { "docid": "cf19d92ed609dcd5ee7e507dd5771c7e", "text": "A number of recent publications have made use of the incremental output of stochastic parsers to derive measures of high utility for psycholinguistic modeling, following the work of Hale (2001; 2003; 2006). In this paper, we present novel methods for calculating separate lexical and syntactic surprisal measures from a single incremental parser using a lexicalized PCFG. We also present an approximation to entropy measures that would otherwise be intractable to calculate for a grammar of that size. Empirical results demonstrate the utility of our methods in predicting human reading times.", "title": "" }, { "docid": "d0a6ca9838f8844077fdac61d1d75af1", "text": "Depth-first search, as developed by Tarjan and coauthors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency. In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan’s LOWPOINT method for strong components is presented in texts [1, 7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptu-", "title": "" }, { "docid": "cc570f3d281947d417cd8476af3cced9", "text": "This paper deals with the problem of fine-grained image classification and introduces the notion of hierarchical metric learning for the same. It is indeed challenging to categorize fine-grained image classes merely in terms of a single level classifier given the subtle inter-class visual differences. In order to tackle this problem, we propose a two stage framework where i) the image categories are represented hierarchically in terms of a binary tree structure where different subset of classes are present in the non-leaf nodes of the tree. This is accomplished in an automatic fashion considering the available training data in the visual domain, and ii) a (non-leaf) node specific metric learning is further deployed for the categories constituting a given node, thus enforcing better separation between both of its children. Subsequently, we construct (non-leaf) node specific binary classifiers in the learned metric spaces on which testing is henceforth carried out by following the outcomes of the classifiers sequence from root to leaf nodes of the tree. By separately focusing on the semantically similar classes at different levels of the hierarchy, it is expected that the classifiers in the learned metric spaces possess better discriminative capabilities than considering all the classes at a single go. 
Experimental results obtained on two challenging datasets (Oxford Flowers and Leeds Butterfly) establish the superiority of the proposed framework in comparison to the standard single metric learning based methods convincingly.", "title": "" }, { "docid": "247534c6b5416e4330a84e10daf2bc0c", "text": "The aim of the present study was to determine metabolic responses, movement patterns and distance covered at running speeds corresponding to fixed blood lactate concentrations (FBLs) in young soccer players during a match play. A further aim of the study was to evaluate the relationships between FBLs, maximal oxygen consumption (VO2max) and distance covered during a game. A multistage field test was administered to 32 players to determine FBLs and VO2max. Blood lactate (LA), heart rate (HR) and rate of perceived exertion (RPE) responses were obtained from 36 players during tournament matches filmed using six fixed cameras. Images were transferred to a computer, for calibration and synchronization. In all players, values for LA and HR were higher and RPE lower during the 1(st) half compared to the 2(nd) half of the matches (p < 0.01). Players in forward positions had higher LA levels than defenders, but HR and RPE values were similar between playing positions. Total distance and distance covered in jogging, low-moderate-high intensity running and low intensity sprint were higher during the 1(st) half (p < 0.01). In the 1(st) half, players also ran longer distances at FBLs [p<0.01; average running speed at 2mmol·L(-1) (FBL2): 3.32 ± 0.31m·s(-1) and average running speed at 4mmol·L(-1) (FBL4): 3.91 ± 0.25m·s(-1)]. There was a significant difference between playing positions in distance covered at different running speeds (p < 0.05). However, when distance covered was expressed as FBLs, the players ran similar distances. In addition, relationships between FBLs and total distance covered were significant (r = 0.482 to 0.570; p < 0.01). In conclusion, these findings demonstrated that young soccer players experienced higher internal load during the 1(st) half of a game compared to the 2(nd) half. Furthermore, although movement patterns of players differed between playing positions, all players experienced a similar physiological stress throughout the game. Finally, total distance covered was associated to fixed blood lactate concentrations during play. Key pointsBased on LA, HR and RPE responses, young top soccer players experienced a higher physiological stress during the 1(st) half of the matches compared to the 2(nd) half.Movement patterns differed in accordance with the players' positions but that all players experienced a similar physiological stress during match play.Approximately one quarter of total distance was covered at speeds that exceeded the 4 mmol·L(-1) fixed LA threshold.Total distance covered was influenced by running speeds at fixed lactate concentrations in young soccer players during match play.", "title": "" }, { "docid": "10124ea154b8704c3a6aaec7543ded57", "text": "Tomato bacterial wilt and canker, caused by Clavibacter michiganensis subsp. michiganensis (Cmm) is considered one of the most important bacterial diseases of tomato worldwide. During the last two decades, severe outbreaks have occurred in greenhouses in the horticultural belt of Buenos Aires-La Plata, Argentina. 
Cmm strains collected in this area over a period of 14 years (2000–2013) were characterized for genetic diversity by rep-PCR genomic fingerprinting and level of virulence in order to have a better understanding of the source of inoculum and virulence variability. Analyses of BOX-, ERIC- and REP-PCR fingerprints revealed that the strains were genetically diverse; the same three fingerprint types were obtained in all three cases. No relationship could be established between rep-PCR clustering and the year, location or greenhouse origin of isolates, which suggests different sources of inoculum. However, in a few cases, bacteria with identical fingerprint types were isolated from the same greenhouse in different years. Despite strains differing in virulence, particularly within BOX-PCR groups, putative virulence genes located in plasmids (celA, pat-1) or in a pathogenicity island in the chromosome (tomA, chpC, chpG and ppaA) were detected in all strains. Our results suggest that new strains introduced every year via seed importation might be coexisting with others persisting locally. This study highlights the importance of preventive measures to manage tomato bacterial wilt and canker.", "title": "" }, { "docid": "e49ea1a6aa8d7ffec9ca16ac18cfc43a", "text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work marries the two and proposes a method for representing generic objects as quadrics which allows object detections to be seamlessly integrated in a SLAM framework. For scene coverage, additional dominant planar structures are modeled as infinite planes. Experiments show that the proposed points-planes-quadrics representation can easily incorporate Manhattan and object affordance constraints, greatly improving camera localization and leading to semantically meaningful maps. The performance of our SLAM system is demonstrated in https://youtu.be/dR-rB9keF8M.", "title": "" }, { "docid": "d6379e449f1b7c6d845a004c59c1023c", "text": "Phase-shifted ZVS PWM full-bridge converter realizes ZVS and eliminates the voltage oscillation caused by the reverse recovery of the rectifier diodes by introducing a resonant inductance and two clamping diodes. This paper improves the converter just by exchanging the position of the resonant inductance and the transformer such that the transformer is connected with the lagging leg. The improved converter has several advantages over the original counterpart, e.g., the clamping diodes conduct only once in a switching cycle, and the resonant inductance current is smaller in zero state, leading to a higher efficiency and reduced duty cycle loss. A blocking capacitor is usually introduced to the primary side to prevent the transformer from saturating, this paper analyzes the effects of the blocking capacitor in different positions, and a best scheme is determined. A 2850 W prototype converter is built to verify the effectiveness of the improved converter and the best scheme for the blocking capacitor.", "title": "" }, { "docid": "271639e9eea6a47f3d80214517444072", "text": "The treatment of juvenile idiopathic arthritis (JIA) is evolving. The growing number of effective drugs has led to successful treatment and prevention of long-term sequelae in most patients. 
Although patients with JIA frequently achieve lasting clinical remission, sustained remission off medication is still elusive for most. Treatment approaches vary substantially among paediatric rheumatologists owing to the inherent heterogeneity of JIA and, until recently, to the lack of accepted and well-evidenced guidelines. Furthermore, many pertinent questions related to patient management remain unanswered, in particular regarding treatment targets, and selection, intensity and sequence of initiation or withdrawal of therapy. Existing JIA guidelines and recommendations do not specify treat-to-target or tight control strategies, in contrast to adult rheumatology in which these approaches have been successful. The concepts of window of opportunity (early treatment to improve long-term outcomes) and immunological remission (abrogation of subclinical disease activity) are also fundamental when defining treatment methodologies. This Review explores the application of these concepts to JIA and their possible contribution to the development of future clinical guidelines or consensus treatment protocols. The article also discusses how diverse forms of standardized, guideline-led care and personalized treatment can be combined into a targeted, patient-centred approach to optimize management strategies for patients with JIA.", "title": "" }, { "docid": "fb15c88052883b11a34d0911979c30a1", "text": "What explains patterns of compliance with and resistance to autocratic rule? This paper provides a theoretical framework for understanding how individuals living under dictatorship calibrate their political behaviors. I argue that the types of non-compliance observed in autocratic contexts differ depending on the intensity of expected punishment and the extent to which sanctions are directed at individuals, families or larger communities. Using data from documents captured by US forces during the 2003 invasion of Iraq, I use unanticipated political shocks to examine over-time discontinuities in citizen behavior in Iraq under Saddam Hussein during two distinct periods — before and after the First Gulf War and the associated Kurdish and Shi‘a anti-regime uprisings. Prior to 1991 and the establishment of a Kurdish autonomous zone in northern Iraq, severe repression and widespread use of collective punishment created the conditions for Iraqi Kurds to engage in a widespread anti-regime rebellion. Before 1991, Shi‘a Iraqis were able to express limited forms of political discontent; after 1991, however, Shi‘a were forced to publicly signal compliance while shifting to more private forms of anti-regime activity. While Iraqis living in and around Saddam Hussein’s hometown of Tikrit almost universally self-identified as Ba‘thists and enjoyed privileges as a result of close ties to the regime, Sunnis living in areas distant from Tikrit became increasingly estranged from the regime as international sanctions closed off economic opportunities. ∗Many thanks to the staff at the Library and Archives of the Hoover Institution and the W. Glenn Campbell and Rita Ricardo-Campbell National Fellows Program at the Hoover Institution.", "title": "" }, { "docid": "9653346c41cab4e22c9987586bb155c1", "text": "The focus of the great majority of climate change impact studies is on changes in mean climate. In terms of climate model output, these changes are more robust than changes in climate variability. 
By concentrating on changes in climate means, the full impacts of climate change on biological and human systems are probably being seriously underestimated. Here, we briefly review the possible impacts of changes in climate variability and the frequency of extreme events on biological and food systems, with a focus on the developing world. We present new analysis that tentatively links increases in climate variability with increasing food insecurity in the future. We consider the ways in which people deal with climate variability and extremes and how they may adapt in the future. Key knowledge and data gaps are highlighted. These include the timing and interactions of different climatic stresses on plant growth and development, particularly at higher temperatures, and the impacts on crops, livestock and farming systems of changes in climate variability and extreme events on pest-weed-disease complexes. We highlight the need to reframe research questions in such a way that they can provide decision makers throughout the food system with actionable answers, and the need for investment in climate and environmental monitoring. Improved understanding of the full range of impacts of climate change on biological and food systems is a critical step in being able to address effectively the effects of climate variability and extreme events on human vulnerability and food security, particularly in agriculturally based developing countries facing the challenge of having to feed rapidly growing populations in the coming decades.", "title": "" }, { "docid": "08b2de5f1c6356c988ac9d6f09ca9a31", "text": "Novel conditions are derived that guarantee convergence of the sum-product algorithm (also known as loopy belief propagation or simply belief propagation (BP)) to a unique fixed point, irrespective of the initial messages, for parallel (synchronous) updates. The computational complexity of the conditions is polynomial in the number of variables. In contrast with previously existing conditions, our results are directly applicable to arbitrary factor graphs (with discrete variables) and are shown to be valid also in the case of factors containing zeros, under some additional conditions. The conditions are compared with existing ones, numerically and, if possible, analytically. For binary variables with pairwise interactions, sufficient conditions are derived that take into account local evidence (i.e., single-variable factors) and the type of pair interactions (attractive or repulsive). It is shown empirically that this bound outperforms existing bounds.", "title": "" } ]
scidocsrr
2c20cee23a72443a07afab221e275b62
Framework for SCADA cyber-attack dataset creation
[ { "docid": "808115043786372af3e3fb726cc3e191", "text": "Scapy is a free and open source packet manipulation environment written in Python language. In this paper we present a Modbus extension to Scapy, and show how this environment can be used to build tools for security analysis of industrial network protocols. Our implementation can be extended to other industrial network protocols and can help security analysts to understand how these protocols work under attacks or adverse conditions.", "title": "" }, { "docid": "11ed7e0742ddb579efe6e1da258b0d3c", "text": "Supervisory Control and Data Acquisition(SCADA) systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks. However, SCADA systems generally have little protection from the escalating cyber threats. In order to understand the potential danger and to protect SCADA systems, in this paper, we highlight their difference from standard IT systems and present a set of security property goals. Furthermore, we focus on systematically identifying and classifying likely cyber attacks including cyber-induced cyber-physical attack son SCADA systems. Determined by the impact on control performance of SCADA systems, the attack categorization criteria highlights commonalities and important features of such attacks that define unique challenges posed to securing SCADA systems versus traditional Information Technology(IT) systems.", "title": "" }, { "docid": "9734cfaecfbd54f968291e9154e2ab3d", "text": "The Modbus protocol and its variants are widely used in industrial control applications, especially for pipeline operations in the oil and gas sector. This paper describes the principal attacks on the Modbus Serial and Modbus TCP protocols and presents the corresponding attack taxonomies. The attacks are summarized according to their threat categories, targets and impact on control system assets. The attack taxonomies facilitate formal risk analysis efforts by clarifying the nature and scope of the security threats on Modbus control systems and networks. Also, they provide insights into potential mitigation strategies and the relative costs and benefits of implementing these strategies. c © 2008 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "d58110b3f449cb76c7327fb3da80d027", "text": "The subject of this paper is robust voice activity detection (VAD) in noisy environments, especially in car environments. We present a comparison between several frame based VAD feature extraction algorithms in combination with different classifiers. Experiments are carried out under equal test conditions using clean speech, clean speech with added car noise and speech recorded in car environments. The lowest error rate is achieved applying features based on a likelihood ratio test which assumes normal distribution of speech and noise and a perceptron classifier. We propose modifications of this algorithm which reduce the frame error rate by approximately 30% relative in our experiments compared to the original algorithm.", "title": "" }, { "docid": "5249a94aa9d9dbb211bb73fa95651dfd", "text": "Power and energy have become increasingly important concerns in the design and implementation of today's multicore/manycore chips. In this paper, we present two priority-based CPU scheduling algorithms, Algorithm Cache Miss Priority CPU Scheduler (CM-PCS) and Algorithm Context Switch Priority CPU Scheduler (CS-PCS), which take advantage of often ignored dynamic performance data, in order to reduce power consumption by over 20 percent with a significant increase in performance. Our algorithms utilize Linux cpusets and cores operating at different fixed frequencies. Many other techniques, including dynamic frequency scaling, can lower a core's frequency during the execution of a non-CPU intensive task, thus lowering performance. Our algorithms match processes to cores better suited to execute those processes in an effort to lower the average completion time of all processes in an entire task, thus improving performance. They also consider a process's cache miss/cache reference ratio, number of context switches and CPU migrations, and system load. Finally, our algorithms use dynamic process priorities as scheduling criteria. We have tested our algorithms using a real AMD Opteron 6134 multicore chip and measured results directly using the “KillAWatt” meter, which samples power periodically during execution. Our results show not only a power (energy/execution time) savings of 39 watts (21.43 percent) and 38 watts (20.88 percent), but also a significant improvement in the performance, performance per watt, and execution time · watt (energy) for a task consisting of 24 concurrently executing benchmarks, when compared to the default Linux scheduler and CPU frequency scaling governor.", "title": "" }, { "docid": "ab5f79671bcd56a733236b089bd5e955", "text": "Conversational modeling is an important task in natural language processing as well as machine learning. Like most important tasks, it’s not easy. Previously, conversational models have been focused on specific domains, such as booking hotels or recommending restaurants. They were built using hand-crafted rules, like ChatScript [11], a popular rule-based conversational model. In 2014, the sequence to sequence model being used for translation opened the possibility of phrasing dialogues as a translation problem: translating from an utterance to its response. The systems built using this principle, while conversing fairly fluently, aren’t very convincing because of their lack of personality and inconsistent persona [10] [5]. In this paper, we experiment building open-domain response generator with personality and identity. 
We built chatbots that imitate characters in popular TV shows: Barney from How I Met Your Mother, Sheldon from The Big Bang Theory, Michael from The Office, and Joey from Friends. A successful model of this kind can have a lot of applications, such as allowing people to speak with their favorite celebrities, creating more life-like AI assistants, or creating virtual alter-egos of ourselves. The model was trained end-to-end without any hand-crafted rules. The bots talk reasonably fluently, have distinct personalities, and seem to have learned certain aspects of their identity. The results of standard automated translation model evaluations yielded very low scores. However, we designed an evaluation metric with a human judgment element, for which the chatbots performed well. We are able to show that for a bot’s response, a human is more than 50% likely to believe that the response actually came from the real character. Keywords—Seq2seq, attentional mechanism, chatbot, dialogue system.", "title": "" }, { "docid": "e597f9fbd0d42066b991c6e917a1e767", "text": "While Open Data initiatives are diverse, they aim to create and contribute to public value. Yet several potential contradictions exist between public values, such as trust, transparency, privacy, and security, and Open Data policies. To bridge these contradictions, we present the notion of precommitment as a restriction of one’s choices. Conceptualized as a policy instrument, precommitment can be applied by an organization to restrict the extent to which an Open Data policy might conflict with public values. To illustrate the use of precommitment, we present two case studies at two public sector organizations, where precommitment is applied during a data request procedure to reconcile conflicting values. In this procedure, precommitment is operationalized in three phases. In the first phase, restrictions are defined on the type and the content of the data that might be requested. The second phase involves the preparation of the data to be delivered according to legal requirements and the decisions taken in phase 1. Data preparation includes amongst others the deletion of privacy sensitive or other problematic attributes. Finally, phase 3 pertains to the establishment of the conditions of reuse of the data, limiting the use to restricted user groups or opening the data for everyone.", "title": "" }, { "docid": "22a3d3ac774a5da4f165e90edcbd1666", "text": "One of the difficulties of neural machine translation (NMT) is the recall and appropriate translation of low-frequency words or phrases. In this paper, we propose a simple, fast, and effective method for recalling previously seen translation examples and incorporating them into the NMT decoding process. Specifically, for an input sentence, we use a search engine to retrieve sentence pairs whose source sides are similar with the input sentence, and then collect n-grams that are both in the retrieved target sentences and aligned with words that match in the source sentences, which we call “translation pieces”. We compute pseudoprobabilities for each retrieved sentence based on similarities between the input sentence and the retrieved source sentences, and use these to weight the retrieved translation pieces. Finally, an existing NMT model is used to translate the input sentence, with an additional bonus given to outputs that contain the collected translation pieces. 
We show our method improves NMT translation results up to 6 BLEU points on three narrow domain translation tasks where repetitiveness of the target sentences is particularly salient. It also causes little increase in the translation time, and compares favorably to another alternative retrievalbased method with respect to accuracy, speed, and simplicity of implementation.", "title": "" }, { "docid": "f6540d23f09c8ee4b6a11187abe82112", "text": "We propose a visual analytics approach for the exploration and analysis of dynamic networks. We consider snapshots of the network as points in high-dimensional space and project these to two dimensions for visualization and interaction using two juxtaposed views: one for showing a snapshot and one for showing the evolution of the network. With this approach users are enabled to detect stable states, recurring states, outlier topologies, and gain knowledge about the transitions between states and the network evolution in general. The components of our approach are discretization, vectorization and normalization, dimensionality reduction, and visualization and interaction, which are discussed in detail. The effectiveness of the approach is shown by applying it to artificial and real-world dynamic networks.", "title": "" }, { "docid": "2891ce3327617e9e957488ea21e9a20c", "text": "Recently, remote healthcare systems have received increasing attention in the last decade, explaining why intelligent systems with physiology signal monitoring for e-health care are an emerging area of development. Therefore, this study adopts a system which includes continuous collection and evaluation of multiple vital signs, long-term healthcare, and a cellular connection to a medical center in emergency case and it transfers all acquired raw data by the internet in normal case. The proposed system can continuously acquire four different physiological signs, for example, ECG, SpO2, temperature, and blood pressure and further relayed them to an intelligent data analysis scheme to diagnose abnormal pulses for exploring potential chronic diseases. The proposed system also has a friendly web-based interface for medical staff to observe immediate pulse signals for remote treatment. Once abnormal event happened or the request to real-time display vital signs is confirmed, all physiological signs will be immediately transmitted to remote medical server through both cellular networks and internet. Also data can be transmitted to a family member's mobile phone or doctor's phone through GPRS. A prototype of such system has been successfully developed and implemented, which will offer high standard of healthcare with a major reduction in cost for our society.", "title": "" }, { "docid": "01ddd5cf694df46a69341549f70529f8", "text": "The RiskTrack project aims to help in the prevention of terrorism through the identification of online radicalisation. In line with the European Union priorities in this matter, this project has been designed to identify and tackle the indicators that raise a red flag about which individuals or communities are being radicalised and recruited to commit violent acts of terrorism. Therefore, the main goals of this project will be twofold: On the one hand, it is needed to identify the main features and characteristics that can be used to evaluate a risk situation, to do that a risk assessment methodology studying how to detect signs of radicalisation (e.g., use of language, behavioural patterns in social networks...) will be designed. 
On the other hand, these features will be tested and analysed using advanced data mining methods, knowledge representation (semantic and ontology engineering) and multilingual technologies. The innovative aspect of this project is that it offers not just a risk assessment methodology, but also a tool built on this methodology, so that prosecutors, judges, law enforcement and other actors can obtain short-term tangible results.", "title": "" }, { "docid": "ba5b796721787105e48ad2794cfc11cc", "text": "Real world applications of machine learning in natural language processing can span many different domains and usually require a huge effort for the annotation of domain specific training data. For this reason, domain adaptation techniques have gained a lot of attention in recent years. In order to derive an effective domain adaptation, a good feature representation across domains is crucial, as is the generalisation ability of the predictive model. In this paper we address the problem of domain adaptation for sentiment classification by combining deep learning, for acquiring a cross-domain high-level feature representation, and ensemble methods, for reducing the cross-domain generalization error. The proposed adaptation framework has been evaluated on a benchmark dataset composed of reviews of four different Amazon product categories, significantly outperforming the state-of-the-art methods.", "title": "" }, { "docid": "5c5f75a7dd8f3241346b45592fa60faa", "text": "Participants searched for discrepant fear-relevant pictures (snakes or spiders) in grid-pattern arrays of fear-irrelevant pictures belonging to the same category (flowers or mushrooms) and vice versa. Fear-relevant pictures were found more quickly than fear-irrelevant ones. Fear-relevant, but not fear-irrelevant, search was unaffected by the location of the target in the display and by the number of distractors, which suggests parallel search for fear-relevant targets and serial search for fear-irrelevant targets. Participants specifically fearful of snakes but not spiders (or vice versa) showed facilitated search for the feared objects but did not differ from controls in search for nonfeared fear-relevant or fear-irrelevant targets. Thus, evolutionarily relevant threatening stimuli were effective in capturing attention, and this effect was further facilitated if the stimulus was emotionally provocative.", "title": "" }, { "docid": "40ca946c3cd4c8617585c648de5ce883", "text": "Investigating the incidence, type, and preventability of adverse drug events (ADEs) and medication errors is crucial to improving the quality of health care delivery. ADEs, potential ADEs, and medication errors can be collected by extraction from practice data, solicitation of incidents from health professionals, and patient surveys. Practice data include charts, laboratory, prescription data, and administrative databases, and can be reviewed manually or screened by computer systems to identify signals. Research nurses, pharmacists, or research assistants review these signals, and those that are likely to represent an ADE or medication error are presented to reviewers who independently categorize them into ADEs, potential ADEs, medication errors, or exclusions. These incidents are also classified according to preventability, ameliorability, disability, severity, stage, and responsible person.
These classifications, as well as the initial selection of incidents, have been evaluated for agreement between reviewers and the level of agreement found ranged from satisfactory to excellent (kappa = 0.32-0.98). The method of ADE and medication error detection and classification described is feasible and has good reliability. It can be used in various clinical settings to measure and improve medication safety.", "title": "" }, { "docid": "948e65673f679fe37027f4dc496397f8", "text": "Online courses are growing at a tremendous rate, and although we have discovered a great deal about teaching and learning in the online environment, there is much left to learn. One variable that needs to be explored further is procrastination in online coursework. In this mixed methods study, quantitative methods were utilized to evaluate the influence of online graduate students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Additionally, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. Collectively, results indicated that ability, effort, context, and luck influenced procrastination in this sample of graduate students. A discussion of these findings, implications for instructors, and recommendations for future research ensues. Online course offerings and degree programs have recently increased at a rapid rate and have gained in popularity among students (Allen & Seaman, 2010, 2011). Garrett (2007) reported that half of prospective students surveyed about postsecondary programs expressed a preference for online and hybrid programs, typically because of the flexibility and convenience (Daymont, Blau, & Campbell, 2011). Advances in learning management systems such as Blackboard have facilitated the dramatic increase in asynchronous programs. Although the research literature concerning online learning has blossomed over the past decade, much is left to learn about important variables that impact student learning and achievement. The purpose of this mixed methods study was to better understand the relationship between online graduate students’ attributional beliefs and their tendency to procrastinate. The approach to this objective was twofold. First, quantitative methods were utilized to evaluate the influence of students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Second, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. Journal of Interactive Online Learning Rakes, Dunn, and Rakes", "title": "" }, { "docid": "458d4e83692b4512e98d071f2c173f3d", "text": "The addition of two binary numbers is the basic and most often used arithmetic operation on microprocessors, digital signal processors and data processing application specific integrated circuits. Parallel prefix adder is a general technique for speeding up binary addition. This method implements logic functions which determine whether groups of bits will generate or propagate a carry. The proposed 64-bit adder is designed using four different types prefix cell operators, even-dot cells, odd-dot cells, even-semi-dot cells and odd-semi-dot cells; it offers robust adder solutions typically used for low power and high-performance design application needs. 
The comparison can be made with various input ranges of Parallel Prefix adders in terms power, number of transistor, number of nodes. Tanner EDA tool was used for simulating the parallel prefix adder designs in the 250nm technologies.", "title": "" }, { "docid": "48019a3106c6d74e4cfcc5ac596d4617", "text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.", "title": "" }, { "docid": "184da4d4589a3a9dc1f339042e6bc674", "text": "Ocular dominance plasticity has long served as a successful model for examining how cortical circuits are shaped by experience. In this paradigm, altered retinal activity caused by unilateral eye-lid closure leads to dramatic shifts in the binocular response properties of neurons in the visual cortex. Much of the recent progress in identifying the cellular and molecular mechanisms underlying ocular dominance plasticity has been achieved by using the mouse as a model system. In this species, monocular deprivation initiated in adulthood also causes robust ocular dominance shifts. Research on ocular dominance plasticity in the mouse is starting to provide insight into which factors mediate and influence cortical plasticity in juvenile and adult animals.", "title": "" }, { "docid": "c9d3def588f5f3dc95955635ebaa0d3d", "text": "In this paper we propose a novel computer vision method for classifying human facial expression from low resolution images. Our method uses the bag of words representation. It extracts dense SIFT descriptors either from the whole image or from a spatial pyramid that divides the image into increasingly fine sub-regions. Then, it represents images as normalized (spatial) presence vectors of visual words from a codebook obtained through clustering image descriptors. Linear kernels are built for several choices of spatial presence vectors, and combined into weighted sums for multiple kernel learning (MKL). For machine learning, the method makes use of multi-class one-versus-all SVM on the MKL kernel computed using this representation, but with an important twist, the learning is local, as opposed to global – in the sense that, for each face with an unknown label, a set of neighbors is selected to build a local classification model, which is eventually used to classify only that particular face. Empirical results indicate that the use of presence vectors, local learning and spatial information improve recognition performance together by more than 5%. 
Finally, the proposed model ranked fourth in the Facial Expression Recognition Challenge, with an accuracy of 67.484% on the final test set. ICML 2013 Workshop on Representation Learning, Atlanta, Georgia, USA, 2013. Copyright 2013 by the author(s).", "title": "" }, { "docid": "89a30c32b99fc2cd8e0041aa044b8135", "text": "Charles Edward Beevor was born in London in 1854, and educated at the Blackheath Proprietary School and University College, London. He received his medical training at University College Hospital, took the diploma of MRCS Eng. in 1878, and graduated MB in the University of London in 1879, and MD in 1881. He obtained the diploma of MRCP Lond. in 1882, and was elected as Fellow in 1888. He became Resident Medical Officer at the National Hospital for the Paralysed and Epileptic, Queen Square, where he was subsequently elected Assistant Physician, then became Physician. He also had studied in Vienna, Leipzig, and Berlin, and was for a good many years Physician to the Great Northern Central Hospital [1]. It is well known that he worked with Sir Victor Horsley in much of his research, especially on cerebral localization. Beevor became Treasurer of the Neurological Society of the United Kingdom for the year 1902, Vice-President in 1905 and President in 1907. His presidential address to the Neurological Society concerned the cerebral arterial supply and was published in Brain in 1907 [2], the journal for which he served on the editorial board assisting Henry Head. According to his obituary, Beevor attended the annual dinner of the Royal Society of Medicine on 4 December 1908, apparently still in good health, but in the early hours of the next day his tragic, sudden death occurred from atheromatous disease of the coronary arteries. The name Beevor is associated with the neurological sign of “an upward migration of the umbilicus in the act of sitting up from supine position due to weakness of the lower half of the rectus abdominis”. This is regarded as a reliable sign indicating lesions at the level of the T10–12 spinal cord and/or roots. Beevor described this finding in his 100-page monograph entitled The Croonian Lectures on Muscular Movements and their Representation in the Central Nervous System, published in 1904 [3]. The essential part of his description about the umbilical movements appears in the section “Movements of the Spinal Column” on page 40, in which he stated “I observed a symptom which enables the investigator to tell if there is any weakness of the upper or lower parts of the recti. This symptom is the movement of the umbilicus. In health, in the movement of sitting up the umbilicus does not alter its position, but if from a lesion of the recti below the umbilicus be paralysed, the normal upper part of the recti will draw up the umbilicus, sometimes to the extent of an inch. As the abdominal wall at the level of the umbilicus is supplied by the tenth dorsal root, any marked elevation of the umbilicus in the act of sitting up would", "title": "" }, { "docid": "73973ae6c858953f934396ab62276e0d", "text": "Unsolicited bulk messages are widespread in short message applications. Although existing spam filters have satisfying performance, they are facing the challenge of an adversary who misleads the spam filters by manipulating samples. Until now, the vulnerability of spam filtering techniques for short messages has not been investigated.
Different from other spam applications, a short message only has a few words and its length usually has an upper limit. The current adversarial learning algorithms may not work efficiently in short message spam filtering. In this paper, we investigate the existing good word attack and its counterattack method, i.e. the feature reweighting, in short message spam filtering in an effort to understand whether, and to what extent, they can work efficiently when the length of a message is limited. This paper proposes a good word attack strategy which maximizes the influence on a classifier with the least number of inserted characters based on the weight values and also the length of words. On the other hand, we also propose the feature reweighting method with a new rescaling function which minimizes the importance of the feature representing a short word in order to require more inserted characters for a successful evasion. The methods are evaluated experimentally by using the SMS and the comment spam datasets. The results confirm that the length of words is a critical factor in the robustness of short message spam filtering to good word attack. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "86a7aa5c2ddcfcefd31e6d7946fdcc3f", "text": "Given a newly posted question on a Question and Answer (Q&A) site, how long will it take until an answer is received? Does response time relate to factors about how the question asker composes their question? If so, what are those factors? With advances in social media and the Web, Q&A sites have become a major source of information for Internet users. Response time of a question is an important aspect in these sites as it is associated with the users' satisfaction and engagement, and thus the lifespan of these online communities. In this paper we study and estimate response time for questions in StackOverflow, a popular online Q&A forum where software developers post and answer questions related to programming. We analyze a long list of factors in the data and identify those that have a clear relation with response time. Our key finding is that tag-related factors, such as their “popularity” (how often the tag is used) and the number of their “subscribers” (how many users can answer questions containing the tag), provide much stronger evidence than factors not related to tags. Finally, we learn models using the identified evidential features for predicting the response time of questions, which also demonstrate the significance of tags chosen by the question asker.", "title": "" }, { "docid": "8f50ea6d0907686767f4a3ba94377952", "text": "Establishment and maintenance of the blood system relies on self-renewing hematopoietic stem cells (HSCs) that normally reside in small numbers in the bone marrow niche of adult mammals. This Review describes the developmental origins of HSCs and the molecular mechanisms that regulate lineage-specific differentiation. Studies of hematopoiesis provide critical insights of general relevance to other areas of stem cell biology including the role of cellular interactions in development and tissue homeostasis, lineage programming and reprogramming by transcription factors, and stage- and age-specific differences in cellular phenotypes.", "title": "" } ]
scidocsrr
d8f9b990587bf5674d33191f25c9e0e4
A Low-Rank Approximation Approach to Learning Joint Embeddings of News Stories and Images for Timeline Summarization
[ { "docid": "346e160403ff9eb55c665f6cb8cca481", "text": "Tasks in visual analytics differ from typical information retrieval tasks in fundamental ways. A critical part of a visual analytics is to ask the right questions when dealing with a diverse collection of information. In this article, we introduce the design and application of an integrated exploratory visualization system called Storylines. Storylines provides a framework to enable analysts visually and systematically explore and study a body of unstructured text without prior knowledge of its thematic structure. The system innovatively integrates latent semantic indexing, natural language processing, and social network analysis. The contributions of the work include providing an intuitive and directly accessible representation of a latent semantic space derived from the text corpus, an integrated process for identifying salient lines of stories, and coordinated visualizations across a spectrum of perspectives in terms of people, locations, and events involved in each story line. The system is tested with the 2006 VAST contest data, in particular, the portion of news articles.", "title": "" }, { "docid": "c8768e560af11068890cc097f1255474", "text": "Abstract This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.", "title": "" } ]
[ { "docid": "93297115eb5153a41a79efe582bd34b1", "text": "Abslract Bayesian probabilily theory provides a unifying framework for dara modelling. In this framework the overall aims are to find models that are well-matched to, the &a, and to use &se models to make optimal predictions. Neural network laming is interpreted as an inference of the most probable parameters for Ihe model, given the training data The search in model space (i.e., the space of architectures, noise models, preprocessings, regularizes and weight decay constants) can then also be treated as an inference problem, in which we infer the relative probability of alternative models, given the data. This review describes practical techniques based on G ~ ~ s s ~ M approximations for implementation of these powerful methods for controlling, comparing and using adaptive network$.", "title": "" }, { "docid": "19fe8c6452dd827ffdd6b4c6e28bc875", "text": "Motivation for the investigation of position and waypoint controllers is the demand for Unattended Aerial Systems (UAS) capable of fulfilling e.g. surveillance tasks in contaminated or in inaccessible areas. Hence, this paper deals with the development of a 2D GPS-based position control system for 4 Rotor Helicopters able to keep positions above given destinations as well as to navigate between waypoints while minimizing trajectory errors. Additionally, the novel control system enables permanent full speed flight with reliable altitude keeping considering that the resulting lift is decreasing while changing pitch or roll angles for position control. In the following chapters the control procedure for position control and waypoint navigation is described. The dynamic behavior was simulated by means of Matlab/Simulink and results are shown. Further, the control strategies were implemented on a flight demonstrator for validation, experimental results are provided and a comparison is discussed.", "title": "" }, { "docid": "4918abc325eae43369e9173c2c75706b", "text": "We propose a fast regression model for practical single image super-resolution based on in-place examples, by leveraging two fundamental super-resolution approaches- learning from an external database and learning from self-examples. Our in-place self-similarity refines the recently proposed local self-similarity by proving that a patch in the upper scale image have good matches around its origin location in the lower scale image. Based on the in-place examples, a first-order approximation of the nonlinear mapping function from low-to high-resolution image patches is learned. Extensive experiments on benchmark and real-world images demonstrate that our algorithm can produce natural-looking results with sharp edges and preserved fine details, while the current state-of-the-art algorithms are prone to visual artifacts. Furthermore, our model can easily extend to deal with noise by combining the regression results on multiple in-place examples for robust estimation. The algorithm runs fast and is particularly useful for practical applications, where the input images typically contain diverse textures and they are potentially contaminated by noise or compression artifacts.", "title": "" }, { "docid": "607247339e5bb0299f06db3104deef77", "text": "This paper discusses the advantages of using the ACT-R cognitive architecture over the Prolog programming language for the research and development of a large-scale, functional, cognitively motivated model of natural language analysis. 
Although Prolog was developed for Natural Language Processing (NLP), it lacks any probabilistic mechanisms for dealing with ambiguity and relies on failure detection and algorithmic backtracking to explore alternative analyses. These mechanisms are problematic for handling ill-formed or unexpected inputs, often resulting in an exploration of the entire search space, which becomes intractable as the complexity and variability of the allowed inputs and corresponding grammar grow. By comparison, ACT-R provides context dependent and probabilistic mechanisms which allow the model to incrementally pursue the best analysis. When combined with a nonmonotonic context accommodation mechanism that supports modest adjustment of the evolving analysis to handle cases where the locally best analysis is not globally preferred, the result is an efficient pseudo-deterministic mechanism that obviates the need for failure detection and backtracking, aligns with our basic understanding of Human Language Processing (HLP) and is scalable to broad coverage. The successful transition of the natural language analysis model from Prolog to ACT-R suggests that a cognitively motivated approach to natural language analysis may also be suitable for achieving a functional capability.", "title": "" }, { "docid": "582aed7bc35603a67d5ff2e5c6e9da28", "text": "In this article we use machine activity metrics to automatically distinguish between malicious and trusted portable executable software samples. The motivation stems from the growth of cyber attacks using techniques that have been employed to surreptitiously deploy Advanced Persistent Threats (APTs). APTs are becoming more sophisticated and able to obfuscate much of their identifiable features through encryption, custom code bases and inmemory execution. Our hypothesis is that we can produce a high degree of accuracy in distinguishing malicious from trusted samples using Machine Learning with features derived from the inescapable footprint left behind on a computer system during execution. This includes CPU, RAM, Swap use and network traffic at a count level of bytes and packets. These features are continuous and allow us to be more flexible with the classification of samples than discrete features such as API calls (which can also be obfuscated) that form the main feature of the extant literature. We use these continuous data and develop a novel classification method using Self Organizing Feature Maps to reduce over fitting during training through the ability to create unsupervised clusters of similar “behaviour” that are subsequently used as features for classification, rather than using the raw data. We compare our method to a set of machine classification methods that have been applied in previous research and demonstrate an increase of between 7.24% and 25.68% in classification accuracy using our method and an unseen dataset over the range of other machine classification methods that have been applied in previous research. © 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).", "title": "" }, { "docid": "cca235b52cc6e7b52febecf15a1ad599", "text": "In this work, we investigated the use of noninvasive, targeted transcutaneous electrical nerve stimulation (TENS) of peripheral nerves to provide sensory feedback to two amputees, one with targeted sensory reinnervation (TSR) and one without TSR. 
A major step in developing a closed-loop prosthesis is providing the sense of touch back to the amputee user. We investigated the effect of targeted nerve stimulation amplitude, pulse width, and frequency on stimulation perception. We discovered that both subjects were able to reliably detect stimulation patterns with pulses less than 1 ms. We utilized the psychophysical results to produce a subject specific stimulation pattern using a leaky integrate and fire (LIF) neuron model from force sensors on a prosthetic hand during a grasping task. For the first time, we show that TENS is able to provide graded sensory feedback at multiple sites in both TSR and non-TSR amputees while using behavioral results to tune a neuromorphic stimulation pattern driven by a force sensor output from a prosthetic hand.", "title": "" }, { "docid": "7cad8fccadff2d8faa8a372c6237469e", "text": "In the spirit of the tremendous success of deep Convolutional Neural Networks as generic feature extractors from images, we propose Timenet : a multilayered recurrent neural network (RNN) trained in an unsupervised manner to extract features from time series. Fixed-dimensional vector representations or embeddings of variable-length sentences have been shown to be useful for a variety of document classification tasks. Timenet is the encoder network of an auto-encoder based on sequence-to-sequence models that transforms varying length time series to fixed-dimensional vector representations. Once Timenet is trained on diverse sets of time series, it can then be used as a generic off-the-shelf feature extractor for time series. We train Timenet on time series from 24 datasets belonging to various domains from the UCR Time Series Classification Archive, and then evaluate embeddings from Timenet for classification on 30 other datasets not used for training the Timenet. We observe that a classifier learnt over the embeddings obtained from a pre-trained Timenet yields significantly better performance compared to (i) a classifier learnt over the embeddings obtained from the encoder network of a domain-specific auto-encoder, as well as (ii) a nearest neighbor classifier based on the well-known and effective Dynamic Time Warping (DTW) distance measure. We also observe that a classifier trained on embeddings from Timenet give competitive results in comparison to a DTW-based classifier even when using significantly smaller set of labeled training data, providing further evidence that Timenet embeddings are robust. Finally, t-SNE visualizations of Timenet embeddings show that time series from different classes form well-separated clusters.", "title": "" }, { "docid": "61b6cf4bc86ae9a817f6e809fdf59ad2", "text": "In the last few years, phishing scams have rapidly grown posing huge threat to global Internet security. Today, phishing attack is one of the most common and serious threats over Internet where cyber attackers try to steal user’s personal or financial credentials by using either malwares or social engineering. Detection of phishing attacks with high accuracy has always been an issue of great interest. Recent developments in phishing detection techniques have led to various new techniques, specially designed for phishing detection where accuracy is extremely important. Phishing problem is widely present as there are several ways to carry out such an attack, which implies that one solution is not adequate to address it. Two main issues are addressed in our paper. 
First, we discuss in detail phishing attacks, history of phishing attacks and motivation of attacker behind performing this attack. In addition, we also provide taxonomy of various types of phishing attacks. Second, we provide taxonomy of various solutions proposed in the literature to detect and defend from phishing attacks. In addition, we also discuss various issues and challenges faced in dealing with phishing attacks and spear phishing and how phishing is now targeting the emerging domain of IoT. We discuss various tools and datasets that are used by the researchers for the evaluation of their approaches. This provides better understanding of the problem, current solution space and future research scope to efficiently deal with such attacks.", "title": "" }, { "docid": "9787d99954114de7ddd5a58c18176380", "text": "This paper presents a system for acoustic event detection in recordings from real life environments. The events are modeled using a network of hidden Markov models; their size and topology is chosen based on a study of isolated events recognition. We also studied the effect of ambient background noise on event classification performance. On real life recordings, we tested recognition of isolated sound events and event detection. For event detection, the system performs recognition and temporal positioning of a sequence of events. An accuracy of 24% was obtained in classifying isolated sound events into 61 classes. This corresponds to the accuracy of classifying between 61 events when mixed with ambient background noise at 0dB signal-to-noise ratio. In event detection, the system is capable of recognizing almost one third of the events, and the temporal positioning of the events is not correct for 84% of the time.", "title": "" }, { "docid": "a97f71e0d5501add1ae08eeee5378045", "text": "Machine learning is being implemented in bioinformatics and computational biology to solve challenging problems emerged in the analysis and modeling of biological data such as DNA, RNA, and protein. The major problems in classifying protein sequences into existing families/superfamilies are the following: the selection of a suitable sequence encoding method, the extraction of an optimized subset of features that possesses significant discriminatory information, and the adaptation of an appropriate learning algorithm that classifies protein sequences with higher classification accuracy. The accurate classification of protein sequence would be helpful in determining the structure and function of novel protein sequences. In this article, we have proposed a distance-based sequence encoding algorithm that captures the sequence’s statistical characteristics along with amino acids sequence order information. A statistical metric-based feature selection algorithm is then adopted to identify the reduced set of features to represent the original feature space. The performance of the proposed technique is validated using some of the best performing classifiers implemented previously for protein sequence classification. An average classification accuracy of 92% was achieved on the yeast protein sequence data set downloaded from the benchmark UniProtKB database.", "title": "" }, { "docid": "5feea8e7bcb96c826bdf19922e47c922", "text": "This chapter is a review of conceptions of knowledge as they appear in selected bodies of research on teaching. 
Writing as a philosopher of education, my interest is in how notions of knowledge are used and analyzed in a number of research programs that study teachers and their teaching. Of particular interest is the growing research literature on the knowledge that teachers generate as a result of their experience as teachers, in contrast to the knowledge of teaching that is generated by those who specialize in research on teaching. This distinction, as will become apparent, is one that divides more conventional scientific approaches to the study of teaching from what might be thought of as alternative approaches.", "title": "" }, { "docid": "de1ed7fbb69e5e33e17d1276d265a3e1", "text": "Abnormal glucose metabolism and enhanced oxidative stress accelerate cardiovascular disease, a chronic inflammatory condition causing high morbidity and mortality. Here, we report that in monocytes and macrophages of patients with atherosclerotic coronary artery disease (CAD), overutilization of glucose promotes excessive and prolonged production of the cytokines IL-6 and IL-1β, driving systemic and tissue inflammation. In patient-derived monocytes and macrophages, increased glucose uptake and glycolytic flux fuel the generation of mitochondrial reactive oxygen species, which in turn promote dimerization of the glycolytic enzyme pyruvate kinase M2 (PKM2) and enable its nuclear translocation. Nuclear PKM2 functions as a protein kinase that phosphorylates the transcription factor STAT3, thus boosting IL-6 and IL-1β production. Reducing glycolysis, scavenging superoxide and enforcing PKM2 tetramerization correct the proinflammatory phenotype of CAD macrophages. In essence, PKM2 serves a previously unidentified role as a molecular integrator of metabolic dysfunction, oxidative stress and tissue inflammation and represents a novel therapeutic target in cardiovascular disease.", "title": "" }, { "docid": "cd2e7e24b4d8fc12df4f866b4c4e9da2", "text": "The extracellular matrix (ECM) is a major component of tumors and a significant contributor to cancer progression. In this study, we use proteomics to investigate the ECM of human mammary carcinoma xenografts and show that primary tumors of differing metastatic potential differ in ECM composition. Both tumor cells and stromal cells contribute to the tumor matrix and tumors of differing metastatic ability differ in both tumor- and stroma-derived ECM components. We define ECM signatures of poorly and highly metastatic mammary carcinomas and these signatures reveal up-regulation of signaling pathways including TGFβ and VEGF. We further demonstrate that several proteins characteristic of highly metastatic tumors (LTBP3, SNED1, EGLN1, and S100A2) play causal roles in metastasis, albeit at different steps. Finally we show that high expression of LTBP3 and SNED1 correlates with poor outcome for ER(-)/PR(-)breast cancer patients. This study thus identifies novel biomarkers that may serve as prognostic and diagnostic tools. DOI: http://dx.doi.org/10.7554/eLife.01308.001.", "title": "" }, { "docid": "951532d8e0bea472139298de9c5e9842", "text": "Alzheimer's disease (AD), the most common form of dementia, shares many aspects of abnormal brain aging. We present a novel magnetic resonance imaging (MRI)-based biomarker that predicts the individual progression of mild cognitive impairment (MCI) to AD on the basis of pathological brain aging patterns. 
By employing kernel regression methods, the expression of normal brain-aging patterns forms the basis to estimate the brain age of a given new subject. If the estimated age is higher than the chronological age, a positive brain age gap estimation (BrainAGE) score indicates accelerated atrophy and is considered a risk factor for conversion to AD. Here, the BrainAGE framework was applied to predict the individual brain ages of 195 subjects with MCI at baseline, of which a total of 133 developed AD during 36 months of follow-up (corresponding to a pre-test probability of 68%). The ability of the BrainAGE framework to correctly identify MCI-converters was compared with the performance of commonly used cognitive scales, hippocampus volume, and state-of-the-art biomarkers derived from cerebrospinal fluid (CSF). With accuracy rates of up to 81%, BrainAGE outperformed all cognitive scales and CSF biomarkers in predicting conversion of MCI to AD within 3 years of follow-up. Each additional year in the BrainAGE score was associated with a 10% greater risk of developing AD (hazard rate: 1.10 [CI: 1.07-1.13]). Furthermore, the post-test probability was increased to 90% when using baseline BrainAGE scores to predict conversion to AD. The presented framework allows an accurate prediction even with multicenter data. Its fast and fully automated nature facilitates the integration into the clinical workflow. It can be exploited as a tool for screening as well as for monitoring treatment options.", "title": "" }, { "docid": "ce8729f088aaf9f656c9206fc67ff4bd", "text": "Traditional passive radar detectors compute cross correlation of the raw data in the reference and surveillance channels. However, there is no optimality guarantee for this detector in the presence of a noisy reference. Here, we develop a new detector that utilizes a test statistic based on the cross correlation of the principal left singular vectors of the reference and surveillance signal-plus-noise matrices. This detector offers better performance by exploiting the inherent low-rank structure when the transmitted signals are a weighted periodic summation of several identical waveforms (amplitude and phase modulation), as is the case with commercial digital illuminators as well as noncooperative radar. We consider a scintillating target. We provide analytical detection performance guarantees establishing signal-to-noise ratio thresholds above which the proposed detection statistic reliably discriminates, in an asymptotic sense, the signal versus no-signal hypothesis. We validate these results using extensive numerical simulations. We demonstrate the “near constant false alarm rate (CFAR)” behavior of the proposed detector with respect to a fixed, SNR-independent threshold and contrast that with the need to adjust the detection threshold in an SNR-dependent manner to maintain CFAR for other detectors found in the literature. Extensions of the proposed detector for settings applicable to orthogonal frequency division multiplexing (OFDM), adaptive radar are discussed.", "title": "" }, { "docid": "d44a76f19aa8292b156914e821b1361d", "text": "Current concepts in the steps of upper limb development and the way the limb is patterned along its 3 spatial axes are reviewed. 
Finally, the embryogenesis of various congenital hand anomalies is delineated with an emphasis on the pathogenetic basis for each anomaly.", "title": "" }, { "docid": "4790a2dfcdf74d5c9ae5ae8c9f42eb0b", "text": "Inspired by the success of deploying deep learning in the fields of Computer Vision and Natural Language Processing, this learning paradigm has also found its way into the field of Music Information Retrieval. In order to benefit from deep learning in an effective, but also efficient manner, deep transfer learning has become a common approach. In this approach, it is possible to reuse the output of a pre-trained neural network as the basis for a new learning task. The underlying hypothesis is that if the initial and new learning tasks show commonalities and are applied to the same type of input data (e.g., music audio), the generated deep representation of the data is also informative for the new task. Since, however, most of the networks used to generate deep representations are trained using a single initial learning source, their representation is unlikely to be informative for all possible future tasks. In this paper, we present the results of our investigation of what are the most important factors to generate deep representations for the data and learning tasks in the music domain. We conducted this investigation via an extensive empirical study that involves multiple learning sources, as well as multiple deep learning architectures with varying levels of information sharing between sources, in order to learn music representations. We then validate these representations considering multiple target datasets for evaluation. The results of our experiments yield several insights into how to approach the design of methods for learning widely deployable deep data representations in the music domain.", "title": "" }, { "docid": "583e56fcef68f697d19b179766341aba", "text": "We recorded echolocation calls from 14 sympatric species of bat in Britain. Once digitised, one temporal and four spectral features were measured from each call. The frequency-time course of each call was approximated by fitting eight mathematical functions, and the goodness of fit, represented by the mean-squared error, was calculated. Measurements were taken using an automated process that extracted a single call from background noise and measured all variables without intervention. Two species of Rhinolophus were easily identified from call duration and spectral measurements. For the remaining 12 species, discriminant function analysis and multilayer back-propagation perceptrons were used to classify calls to species level. Analyses were carried out with and without the inclusion of curve-fitting data to evaluate its usefulness in distinguishing among species. Discriminant function analysis achieved an overall correct classification rate of 79% with curve-fitting data included, while an artificial neural network achieved 87%. The removal of curve-fitting data improved the performance of the discriminant function analysis by 2 %, while the performance of a perceptron decreased by 2 %. However, an increase in correct identification rates when curve-fitting information was included was not found for all species. The use of a hierarchical classification system, whereby calls were first classified to genus level and then to species level, had little effect on correct classification rates by discriminant function analysis but did improve rates achieved by perceptrons. 
This is the first published study to use artificial neural networks to classify the echolocation calls of bats to species level. Our findings are discussed in terms of recent advances in recording and analysis technologies, and are related to factors causing convergence and divergence of echolocation call design in bats.", "title": "" }, { "docid": "ed44c393c44ee6e63cab1305146a4f9d", "text": "This paper presents a novel method for online and incremental appearance-based localization and mapping in a highly dynamic environment. Using position-invariant robust features (PIRFs), the method can achieve a high rate of recall with 100% precision. It can handle both strong perceptual aliasing and dynamic changes of places efficiently. Its performance also extends beyond conventional images; it is applicable to omnidirectional images for which the major portions of scenes are similar for most places. The proposed PIRF-based Navigation method named PIRF-Nav is evaluated by testing it on two standard datasets as is in FAB-MAP and on an additional omnidirectional image dataset that we collected. This extra dataset is collected on two days with different specific events, i.e., an open-campus event, to present challenges related to illumination variance and strong dynamic changes, and to test assessment of dynamic scene changes. Results show that PIRF-Nav outperforms FAB-MAP; PIRF-Nav at precision-1 yields a recall rate about two times (approximately 80%) higher than that of FAB-MAP. Its computation time is sufficiently short for real-time applications. The method is fully incremental, and requires no offline process for dictionary creation. Additional testing using combined datasets proves that PIRF-Nav can function over a long term and can solve the kidnapped robot problem.", "title": "" }, { "docid": "dbb21f81126dd049a569b26596151409", "text": "A flexible statistical framework is developed for the analysis of read counts from RNA-Seq gene expression studies. It provides the ability to analyse complex experiments involving multiple treatment conditions and blocking variables while still taking full account of biological variation. Biological variation between RNA samples is estimated separately from the technical variation associated with sequencing technologies. Novel empirical Bayes methods allow each gene to have its own specific variability, even when there are relatively few biological replicates from which to estimate such variability. The pipeline is implemented in the edgeR package of the Bioconductor project. A case study analysis of carcinoma data demonstrates the ability of generalized linear model methods (GLMs) to detect differential expression in a paired design, and even to detect tumour-specific expression changes. The case study demonstrates the need to allow for gene-specific variability, rather than assuming a common dispersion across genes or a fixed relationship between abundance and variability. Genewise dispersions de-prioritize genes with inconsistent results and allow the main analysis to focus on changes that are consistent between biological replicates. Parallel computational approaches are developed to make non-linear model fitting faster and more reliable, making the application of GLMs to genomic data more convenient and practical. Simulations demonstrate the ability of adjusted profile likelihood estimators to return accurate estimators of biological variability in complex situations. 
When variation is gene-specific, empirical Bayes estimators provide an advantageous compromise between the extremes of assuming common dispersion or separate genewise dispersion. The methods developed here can also be applied to count data arising from DNA-Seq applications, including ChIP-Seq for epigenetic marks and DNA methylation analyses.", "title": "" } ]
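The RNA-Seq passage above argues for moderating noisy genewise dispersion estimates toward a common value rather than committing to either extreme. The sketch below illustrates that shrinkage idea in a deliberately simplified form; the simulated counts, the method-of-moments estimator and the prior weight are assumptions, not the edgeR implementation.

```python
# Simplified sketch of shrinking genewise dispersions toward a common value
# (all numbers and the estimator are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
counts = rng.negative_binomial(n=10, p=0.2, size=(200, 6)).astype(float)  # 200 genes x 6 libraries

mean = counts.mean(axis=1)
var = counts.var(axis=1, ddof=1)

# Method-of-moments NB dispersion per gene: var = mu + phi * mu^2
phi_gene = np.clip((var - mean) / mean**2, 1e-4, None)
phi_common = phi_gene.mean()                 # the "common dispersion" extreme

prior_df = 10.0                              # strength of shrinkage (assumed)
resid_df = counts.shape[1] - 1               # information available per gene
phi_shrunk = (resid_df * phi_gene + prior_df * phi_common) / (resid_df + prior_df)

print(phi_gene[:3], phi_shrunk[:3], phi_common)
```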
scidocsrr
f966943eb26b42b3ae0de29ee67e56f3
A humanoid upper body system for two-handed manipulation
[ { "docid": "51426b334712f2e4b227e59f5701f88f", "text": "The control of humanoid manipulators is very challenging due to the large number of degrees of freedom and the resulting redundancy. Using joint-level control complex planning algorithms are needed to accomplish tasks. For intuitive operation and hence short development times of applications high-level control interfaces are needed. Furthermore, for many tasks it is desirable to define an impedance behavior in task space. In this paper a flexible control law is proposed which offers object-level impedances for two-handed manipulation. The controller structure is based on the well-known compliance control law. The main contributions of this work are the way how to combine several potential functions for two-handed manipulation and the experimental validation of hand-arm coordination. The controller is implemented on DLR's humanoid manipulator Justin and its performance is demonstrated experimentally by unscrewing a can and motion of a grasped box.", "title": "" }, { "docid": "164879a016e455123c3b3c94d291ebf7", "text": "A robot manipulator sharing its workspace with humans should be able to quickly detect collisions and safely react for limiting injuries due to physical contacts. In the absence of external sensing, relative motions between robot and human are not predictable and unexpected collisions may occur at any location along the robot arm. Based on physical quantities such as total energy and generalized momentum of the robot manipulator, we present an efficient collision detection method that uses only proprioceptive robot sensors and provides also directional information for a safe robot reaction after collision. The approach is first developed for rigid robot arms and then extended to the case of robots with elastic joints, proposing different reaction strategies. Experimental results on collisions with the DLR-III lightweight manipulator are reported", "title": "" }, { "docid": "81b03da5e09cb1ac733c966b33d0acb1", "text": "Abstrud In the last two years a third generation of torque-controlled light weight robots has been developed in DLR‘s robotics and mechatronics lab which is based on all the experiences that have been made with the first two generations. It aims at reaching the limits of what seems achievable with present day technologies not only with respect to light-weight, but also with respect to minimal power consumption and losses. One of the main gaps we tried to close in version III was the development of a new, robot-dedicated high energy motor designed with the best available techniques of concurrent engineering, and the renewed efforts to save weight in the links by using ultralight carbon fibres.", "title": "" } ]
[ { "docid": "51624e6c70f4eb5f2295393c68ee386c", "text": "Advances in mobile technologies and devices has changed the way users interact with devices and other users. These new interaction methods and services are offered by the help of intelligent sensing capabilities, using context, location and motion sensors. However, indoor location sensing is mostly achieved by utilizing radio signal (Wi-Fi, Bluetooth, GSM etc.) and nearest neighbor identification. The most common algorithm adopted for Received Signal Strength (RSS)-based location sensing is K Nearest Neighbor (KNN), which calculates K nearest neighboring points to mobile users (MUs). Accordingly, in this paper, we aim to improve the KNN algorithm by enhancing the neighboring point selection by applying k-means clustering approach. In the proposed method, k-means clustering algorithm groups nearest neighbors according to their distance to mobile user. Then the closest group to the mobile user is used to calculate the MU's location. The evaluation results indicate that the performance of clustered KNN is closely tied to the number of clusters, number of neighbors to be clustered and the initiation of the center points in k-mean algorithm. Keywords-component; Received signal strength, k-Means, clustering, location estimation, personal digital assistant (PDA), wireless, indoor positioning", "title": "" }, { "docid": "56495132d3af1da389da3683432eb704", "text": "This paper discusses an object orient approach based on design pattern and computational reflection concept to implement nonfunctional requirements of complex control system. Firstly we brief about software architecture design, followed by control-monitor safety pattern, Tri-Modular redundancy (TMR) pattern, reflective state pattern and fault tolerance redundancy patterns that are use for safety and fault management. Reflection state pattern is a refinement of the state design pattern based on reflection architectural pattern. With variation in reflective design pattern we can develop a well structured fault tolerant system. The main goal of this paper is to separate control and safety aspect from the application logic. It details its intent, motivation, participants, consequences and implementation of safety design pattern. General Terms Design pattern, Safety pattern, Fault tolerance.", "title": "" }, { "docid": "cd61c0b8c1b0f304fa318b22f0577c33", "text": "Software Defined Networking (SDN) is a concept which provides the network operators and data centres to flexibly manage their networking equipment using software running on external servers. According to the SDN framework, the control and management of the networks, which is usually implemented in software, is decoupled from the data plane. On the other hand cloud computing materializes the vision of utility computing. Tenants can benefit from on-demand provisioning of networking, storage and compute resources according to a pay-per-use business model. In this work we present the networking issues in IaaS and networking and federation challenges that are currently addressed with existing technologies. We also present innovative software-define networking proposals, which are applied to some of the challenges and could be used in future deployments as efficient solutions. 
cloud computing networking and the potential contribution of software-defined networking along with some performance evaluation results are presented in this paper.", "title": "" }, { "docid": "f0ea74e3a3ab58435d750bd2e476d002", "text": "This note describes the design of a low-cost interface using Arduino microcontroller boards and Visual Basic programming for operant conditioning research. The board executes one program in Arduino programming language that polls the state of the inputs and generates outputs in an operant chamber. This program communicates through a USB port with another program written in Visual Basic 2010 Express Edition running on a laptop, desktop, netbook computer, or even a tablet equipped with Windows operating system. The Visual Basic program controls schedules of reinforcement and records real-time data. A single Arduino board can be used to control a total of 52 inputs/output lines, and multiple Arduino boards can be used to control multiple operant chambers. An external power supply and a series of micro relays are required to control 28-V DC devices commonly used in operant chambers. Instructions for downloading and using the programs to generate simple and concurrent schedules of reinforcement are provided. Testing suggests that the interface is reliable, accurate, and could serve as an inexpensive alternative to commercial equipment.", "title": "" }, { "docid": "2ca43ef1b7a919e1de0ea2bb01b9c308", "text": "As the popularity of social networks expands, the information users expose to the public has potentially dangerous implications for individual privacy. While social networks allow users to restrict access to their personal data, there is currently no mechanism to enforce privacy concerns over content uploaded by other users. As group photos and stories are shared by friends and family, personal privacy goes beyond the discretion of what a user uploads about himself and becomes an issue of what every network participant reveals. In this paper, we examine how the lack of joint privacy controls over content can inadvertently reveal sensitive information about a user including preferences, relationships, conversations, and photos. Specifically, we analyze Facebook to identify scenarios where conflicting privacy settings between friends will reveal information that at least one user intended remain private. By aggregating the information exposed in this manner, we demonstrate how a user’s private attributes can be inferred from simply being listed as a friend or mentioned in a story. To mitigate this threat, we show how Facebook’s privacy model can be adapted to enforce multi-party privacy. We present a proof of concept application built into Facebook that automatically ensures mutually acceptable privacy restrictions are enforced", "title": "" }, { "docid": "37845c0912d9f1b355746f41c7880c3a", "text": "Deep neural networks are commonly trained using stochastic non-convex optimization procedures, which are driven by gradient information estimated on fractions (batches) of the dataset. While it is commonly accepted that batch size is an important parameter for offline tuning, the benefits of online selection of batches remain poorly understood. We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam. 
As the loss function to be minimized for the whole dataset is an aggregation of loss functions of individual datapoints, intuitively, datapoints with the greatest loss should be considered (selected in a batch) more frequently. However, the limitations of this intuition and the proper control of the selection pressure over time are open questions. We propose a simple strategy where all datapoints are ranked w.r.t. their latest known loss value and the probability to be selected decays exponentially as a function of rank. Our experimental results on the MNIST dataset suggest that selecting batches speeds up both AdaDelta and Adam by a factor of about 5.", "title": "" }, { "docid": "f35d164bd1b19f984b10468c41f149e3", "text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and require more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.", "title": "" }, { "docid": "e4b0ac07d84c51e5c9251f907b597ab9", "text": "Audio fingerprinting, also named as audio hashing, has been well-known as a powerful technique to perform audio identification and synchronization. It basically involves two major steps: fingerprint (voice pattern) design and matching search. While the first step concerns the derivation of a robust and compact audio signature, the second step usually requires knowledge about database and quick-search algorithms. Though this technique offers a wide range of real-world applications, to the best of the authors’ knowledge, a comprehensive survey of existing algorithms appeared more than eight years ago. Thus, in this paper, we present a more up-to-date review and, for emphasizing on the audio signal processing aspect, we focus our state-of-the-art survey on the fingerprint design step for which various audio features and their tractable statistical models are discussed. Keywords–Voice pattern; audio identification and synchronization; spectral features; statistical models.", "title": "" }, { "docid": "e812afed86c4481c70cb80985cc3dc13", "text": "Viruses that cause chronic infection constitute a stable but little-recognized part of our metagenome: our virome. 
Ongoing immune responses hold these chronic viruses at bay while avoiding immunopathologic damage to persistently infected tissues. The immunologic imprint generated by these responses to our virome defines the normal immune system. The resulting dynamic but metastable equilibrium between the virome and the host can be dangerous, benign, or even symbiotic. These concepts require that we reformulate how we assign etiologies for diseases, especially those with a chronic inflammatory component, as well as how we design and interpret genome-wide association studies, and how we vaccinate to limit or control our virome.", "title": "" }, { "docid": "2966dd1e2cd26b7c956d296ef6eb501e", "text": "Information extraction from microblog posts is an important task, as today microblogs capture an unprecedented amount of information and provide a view into the pulse of the world. As the core component of information extraction, we consider the task of Twitter entity linking in this paper. In the current entity linking literature, mention detection and entity disambiguation are frequently cast as equally important but distinct problems. However, in our task, we find that mention detection is often the performance bottleneck. The reason is that messages on micro-blogs are short, noisy and informal texts with little context, and often contain phrases with ambiguous meanings. To rigorously address the Twitter entity linking problem, we propose a structural SVM algorithm for entity linking that jointly optimizes mention detection and entity disambiguation as a single end-to-end task. By combining structural learning and a variety of firstorder, second-order, and context-sensitive features, our system is able to outperform existing state-of-the art entity linking systems by 15% F1.", "title": "" }, { "docid": "70be8e5a26cb56fdd2c230cf36e00364", "text": "If investors are not fully rational, what can smart money do? This paper provides an example in which smart money can strategically take advantage of investors’ behavioral biases and manipulate the price process to make profit. The paper considers three types of traders, behavior-driven investors who are less willing to sell losers than to sell winners (dispositional effect), arbitrageurs, and a manipulator who can influence asset prices. We show that, due to the investors’ behavioral biases and the limit of arbitrage, the manipulator can profit from a “pump-and-dump” trading strategy by accumulating the speculative asset while pushing the asset price up, and then selling the asset at high prices. Since nobody has private information, manipulation here is completely trade-based. The paper also endogenously derives several asset-pricing anomalies, including excess volatility, momentum and reversal. As an empirical test, the paper presents some empirical evidence from the U.S. SEC prosecution of “pump-and-dump” manipulation cases that are consistent with our model. JEL: G12, G18", "title": "" }, { "docid": "b43c4d5d97120963a3ea84a01d029819", "text": "Research into the translation of the output of automatic speech recognition (ASR) systems is hindered by the dearth of datasets developed for that explicit purpose. For SpanishEnglish translation, in particular, most parallel data available exists only in vastly different domains and registers. 
In order to support research on cross-lingual speech applications, we introduce the Fisher and Callhome Spanish-English Speech Translation Corpus, supplementing existing LDC audio and transcripts with (a) ASR 1-best, lattice, and oracle output produced by the Kaldi recognition system and (b) English translations obtained on Amazon’s Mechanical Turk. The result is a four-way parallel dataset of Spanish audio, transcriptions, ASR lattices, and English translations of approximately 38 hours of speech, with defined training, development, and held-out test sets. We conduct baseline machine translation experiments using models trained on the provided training data, and validate the dataset by corroborating a number of known results in the field, including the utility of in-domain (information, conversational) training data, increased performance translating lattices (instead of recognizer 1-best output), and the relationship between word error rate and BLEU score.", "title": "" }, { "docid": "b61a7e1ee0f8100016f61b766332d38f", "text": "We study the cost function for hierarchical clusterings introduced by [Dasgupta, 2016] where hierarchies are treated as first-class objects rather than deriving their cost from projections into flat clusters. It was also shown in [Dasgupta, 2016] that a top-down algorithm returns a hierarchical clustering of cost at most O (αn log n) times the cost of the optimal hierarchical clustering, where αn is the approximation ratio of the Sparsest Cut subroutine used. Thus using the best known approximation algorithm for Sparsest Cut due to Arora-Rao-Vazirani, the top-down algorithm returns a hierarchical clustering of cost at most O ( log3/2 n ) times the cost of the optimal solution. We improve this by giving an O(log n)approximation algorithm for this problem. Our main technical ingredients are a combinatorial characterization of ultrametrics induced by this cost function, deriving an Integer Linear Programming (ILP) formulation for this family of ultrametrics, and showing how to iteratively round an LP relaxation of this formulation by using the idea of sphere growing which has been extensively used in the context of graph partitioning. We also prove that our algorithm returns an O(log n)-approximate hierarchical clustering for a generalization of this cost function also studied in [Dasgupta, 2016]. Experiments show that the hierarchies found by using the ILP formulation as well as our rounding algorithm often have better projections into flat clusters than the standard linkage based algorithms. We conclude with constant factor inapproximability results for this problem: 1) no polynomial size LP or SDP can achieve a constant factor approximation for this problem and 2) no polynomial time algorithm can achieve a constant factor approximation under the assumption of the Small Set Expansion hypothesis.", "title": "" }, { "docid": "32b860121b49bd3a61673b3745b7b1fd", "text": "Online reviews are a growing market, but it is struggling with fake reviews. They undermine both the value of reviews to the user, and their trust in the review sites. However, fake positive reviews can boost a business, and so a small industry producing fake reviews has developed. The two sides are facing an arms race that involves more and more natural language processing (NLP). So far, NLP has been used mostly for detection, and works well on human-generated reviews. But what happens if NLP techniques are used to generate fake reviews as well? 
We investigate the question in an adversarial setup, by assessing the detectability of different fake-review generation strategies. We use generative models to produce reviews based on meta-information, and evaluate their effectiveness against deceptiondetection models and human judges. We find that meta-information helps detection, but that NLP-generated reviews conditioned on such information are also much harder to detect than conventional ones.", "title": "" }, { "docid": "ca58a73d73f4174367cdee6b5269379c", "text": "Data noising is an effective technique for regularizing neural network models. While noising is widely adopted in application domains such as vision and speech, commonly used noising primitives have not been developed for discrete sequencelevel settings such as language modeling. In this paper, we derive a connection between input noising in neural network language models and smoothing in ngram models. Using this connection, we draw upon ideas from smoothing to develop effective noising schemes. We demonstrate performance gains when applying the proposed schemes to language modeling and machine translation. Finally, we provide empirical analysis validating the relationship between noising and smoothing.", "title": "" }, { "docid": "34f7497eaae4a6b56089889781935263", "text": "The research on two-wheeled inverted pendulum (T-WIP) mobile robots or commonly known as balancing robots have gained momentum over the last decade in a number of robotic laboratories around the world (Solerno & Angeles, 2003;Grasser et al., 2002; Solerno & Angeles, 2007;Koyanagi, Lida & Yuta, 1992;Ha & Yuta, 1996; Kim, Kim & Kwak, 2003). This chapter describes the hardware design of such a robot. The objective of the design is to develop a T-WIP mobile robot as well as MATLABTM interfacing configuration to be used as flexible platform which comprises of embedded unstable linear plant intended for research and teaching purposes. Issues such as selection of actuators and sensors, signal processing units, MATLABTM Real Time Workshop coding, modeling and control scheme is addressed and discussed. The system is then tested using a well-known state feedback controller to verify its functionality.", "title": "" }, { "docid": "c8a9aff29f3e420a1e0442ae7caa46eb", "text": "Four new species of Ixora (Rubiaceae, Ixoreae) from Brazil are described and illustrated and their relationships to morphologically similar species as well as their conservation status are discussed. The new species, Ixora cabraliensis, Ixora emygdioi, Ixora grazielae, and Ixora pilosostyla are endemic to the Atlantic Forest of southern Bahia and Espirito Santo. São descritas e ilustradas quatro novas espécies de Ixora (Rubiaceae, Ixoreae) para o Brasil bem como discutidos o relacionamento morfológico com espécies mais similares e o estado de conservação. As novas espécies Ixora cabraliensis, Ixora emygdioi, Ixora grazielae e Ixora pilosostyla são endêmicas da Floresta Atlântica, no trecho do sul do estado da Bahia e o estado do Espírito Santo.", "title": "" }, { "docid": "b2e958ceedce24bf6cd5e448d0b9ec84", "text": "In this paper, we propose a real-time online shopper behavior analysis system consisting of two modules which simultaneously predicts the visitor’s shopping intent and Web site abandonment likelihood. In the first module, we predict the purchasing intention of the visitor using aggregated pageview data kept track during the visit along with some session and user information. 
The extracted features are fed to random forest (RF), support vector machines (SVMs), and multilayer perceptron (MLP) classifiers as input. We use oversampling and feature selection preprocessing steps to improve the performance and scalability of the classifiers. The results show that MLP that is calculated using resilient backpropagation algorithm with weight backtracking produces significantly higher accuracy and F1 Score than RF and SVM. Another finding is that although clickstream data obtained from the navigation path followed during the online visit convey important information about the purchasing intention of the visitor, combining them with session information-based features that possess unique information about the purchasing interest improves the success rate of the system. In the second module, using only sequential clickstream data, we train a long short-term memory-based recurrent neural network that generates a sigmoid output showing the probability estimate of visitor’s intention to leave the site without finalizing the transaction in a prediction horizon. The modules are used together to determine the visitors which have purchasing intention but are likely to leave the site in the prediction horizon and take actions accordingly to improve the Web site abandonment and purchase conversion rates. Our findings support the feasibility of accurate and scalable purchasing intention prediction for virtual shopping environment using clickstream and session information data.", "title": "" }, { "docid": "ef142067a29f8662e36d68ee37c07bce", "text": "The lack of assessment tools to analyze serious games and insufficient knowledge on their impact on players is a recurring critique in the field of game and media studies, education science and psychology. Although initial empirical studies on serious games usage deliver discussable results, numerous questions remain unacknowledged. In particular, questions regarding the quality of their formal conceptual design in relation to their purpose mostly stay uncharted. In the majority of cases the designers' good intentions justify incoherence and insufficiencies in their design. In addition, serious games are mainly assessed in terms of the quality of their content, not in terms of their intention-based design. This paper argues that analyzing a game's formal conceptual design, its elements, and their relation to each other based on the game's purpose is a constructive first step in assessing serious games. By outlining the background of the Serious Game Design Assessment Framework and exemplifying its use, a constructive structure to examine purpose-based games is introduced. To demonstrate how to assess the formal conceptual design of serious games we applied the SGDA Framework to the online games \"Sweatshop\" (2011) and \"ICED\" (2008).", "title": "" } ]
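The shopper-behavior passage above feeds aggregated session features to an MLP after oversampling the rare purchase class. The sketch below mirrors that workflow only in outline with scikit-learn's MLPClassifier; the feature set, class balance and labels are synthetic assumptions, and the original system's resilient-backpropagation training is not reproduced here.

```python
# Outline of an intent classifier over session features (synthetic data, no real signal).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.poisson(5, n),            # product-related pageviews (assumed feature)
    rng.exponential(120.0, n),    # duration on product pages [s] (assumed feature)
    rng.uniform(0, 0.2, n),       # bounce rate (assumed feature)
    rng.uniform(0, 0.2, n),       # exit rate (assumed feature)
])
y = (rng.random(n) < 0.15).astype(int)   # ~15% of sessions end in a purchase (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Naive random oversampling of the minority (purchase) class in the training split.
minority = np.flatnonzero(y_tr == 1)
extra = rng.choice(minority, size=len(minority), replace=True)
X_tr = np.vstack([X_tr, X_tr[extra]])
y_tr = np.concatenate([y_tr, y_tr[extra]])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```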
scidocsrr
8d175e76f2d63b6ec6f8201f814ee537
Convex-Hull-Based Boundary Detection in Unattended Wireless Sensor Networks
[ { "docid": "4faa5fd523361d472fc0bea8508c58f8", "text": "This paper reviews the current state of laser scanning from airborne and terrestrial platforms for geometric reconstruction of object shape and size. The current performance figures of sensor systems are presented in an overview. Next, their calibration and the orientation of the acquired point clouds is discussed. For airborne deployment this is usually one step, whereas in the terrestrial case laboratory calibration and registration of point clouds are (still) two distinct, independent steps. As laser scanning is an active measurement technology, the interaction of the emitted energy with the object surface has influences on the range measurement. This has to be considered in order to explain geometric phenomena in the data. While the problems, e.g. multiple scattering, are understood well, there is currently a lack of remedies. Then, in analogy to the processing chain, segmentation approaches for laser scanning data are reviewed. Segmentation is a task relevant for almost all applications. Likewise, DTM (digital terrain model) reconstruction is relevant for many applications of airborne laser scanning, and is therefore discussed, too. This paper reviews the main processing steps necessary for many applications of laser scanning.", "title": "" } ]
[ { "docid": "689d09822d1ac86a173cde6a6018a8fe", "text": "Novelty detection in time series is an important problem with application in a number of different domains such as machine failure detection and fraud detection in financial systems. One of the methods for detecting novelties in time series consists of building a forecasting model that is later used to predict future values. Novelties are assumed to take place if the difference between predicted and observed values is above a certain threshold. The problem with this method concerns the definition of a suitable value for the threshold. This paper proposes a method based on forecasting with robust confidence intervals for defining the thresholds for detecting novelties. Experiments with six real-world time series are reported and the results show that the method is able to correctly define the thresholds for novelty detection. r 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "670d4860fc3172b7ffa429268462b64d", "text": "This article describes the benefits and risks of providing RPDs. It emphasises the importance of co-operation between the dental team and patient to ensure that the balance of this 'equation' is in the patient's favour.", "title": "" }, { "docid": "a083a09e0b156781d1a782e2b6951c9d", "text": "If a person with carious lesions needs or requests crowns or inlays, these dental fillings have to be manufactured for each tooth and each person individually. We survey computer vision techniques which can be used to automate this process. We introduce three particular applications which are concerned with the reconstruction of surface information. The first one aims at building up a database of normalized depth images of posterior teeth and at extracting characteristic features from these images. In the second application, a given occlusal surface of a posterior tooth with a prepared cavity is digitally reconstructed using an intact model tooth from a given database. The calculated surface data can then be used for automatic milling of a dental prosthesis, e.g. from a preshaped ceramic block. In the third application a hand-made provisoric wax inlay or crown can be digitally scanned by a laser sensor and copied three dimensionally into a different material such as ceramic. The results are converted to a format required by the computer-integrated manufacturing (CIM) system for automatic milling.", "title": "" }, { "docid": "d9ac3ee5ccfa160da42bc740d35faa6f", "text": "This study aimed to determine the prevalence and sources of stress among Thai medical students. The questionnaires,which consisted of the Thai Stress Test (TST) and questions asking about sources of stress, were sent to all medical students in the Faculty of Medicine, Ramathibodi Hospital, Thailand. A total of 686 students participated. The results showed that about 61.4% of students had some degree of stress. Seventeen students (2.4%) reported a high level of stress. The prevalence of stress is highest among third-year medical students. Academic problems were found to be a major cause of stress among all students. The most prevalent source of academic stress was the test/exam. Other sources of stress in medical school and their relationships are also discussed. 
The findings can help medical teachers understand more about stress among their students and guide the way to improvement in an academic context, which is important for student achievement.", "title": "" }, { "docid": "26e79793addc4750dcacc0408764d1e1", "text": "It has been shown that integration of acoustic and visual information especially in noisy conditions yields improved speech recognition results. This raises the question of how to weight the two modalities in different noise conditions. Throughout this paper we develop a weighting process adaptive to various background noise situations. In the presented recognition system, audio and video data are combined following a Separate Integration (SI) architecture. A hybrid Artificial Neural Network/Hidden Markov Model (ANN/HMM) system is used for the experiments. The neural networks were in all cases trained on clean data. Firstly, we evaluate the performance of different weighting schemes in a manually controlled recognition task with different types of noise. Next, we compare different criteria to estimate the reliability of the audio stream. Based on this, a mapping between the measurements and the free parameter of the fusion process is derived and its applicability is demonstrated. Finally, the possibilities and limitations of adaptive weighting are compared and discussed.", "title": "" }, { "docid": "0f2021268693abc34ac6ca7ddcc12534", "text": "The purpose of this article is to discuss the scope and functionality of a versatile environment for testing small- and large-scale nonlinear optimization algorithms. Although many of these facilities were originally produced by the authors in conjunction with the software package LANCELOT, we believe that they will be useful in their own right and should be available to researchers for their development of optimization software. The tools can be obtained by anonymous ftp from a number of sources and may, in many cases, be installed automatically. The scope of a major collection of test problems written in the standard input format (SIF) used by the LANCELOT software package is described. Recognizing that most software was not written with the SIF in mind, we provide tools to assist in building an interface between this input format and other optimization packages. These tools provide a link between the SIF and a number of existing packages, including MINOS and OSL. Additionally, as each problem includes a specific classification that is designed to be useful in identifying particular classes of problems, facilities are provided to build and manage a database of this information. There is a Unix and C shell bias to many of the descriptions in the article, since, for the sake of simplicity, we do not illustrate everything in its fullest generality. We trust that the majority of potential users are sufficiently familiar with Unix that these examples will not lead to undue confusion.", "title": "" }, { "docid": "0c790dd7d95ac5c67f5f4e8859c5c20e", "text": "Many rhizospheric bacterial strains possess plant growth-promoting mechanisms. These bacteria can be applied as biofertilizers in agriculture and forestry, enhancing crop yields. 
Bacterial biofertilizers can improve plant growth through several different mechanisms: (i) the synthesis of plant nutrients or phytohormones, which can be absorbed by plants, (ii) the mobilization of soil compounds, making them available for the plant to be used as nutrients, (iii) the protection of plants under stressful conditions, thereby counteracting the negative impacts of stress, or (iv) defense against plant pathogens, reducing plant diseases or death. Several plant growth-promoting rhizobacteria (PGPR) have been used worldwide for many years as biofertilizers, contributing to increasing crop yields and soil fertility and hence having the potential to contribute to more sustainable agriculture and forestry. The technologies for the production and application of bacterial inocula are under constant development and improvement and the bacterial-based biofertilizer market is growing steadily. Nevertheless, the production and application of these products is heterogeneous among the different countries in the world. This review summarizes the main bacterial mechanisms for improving crop yields, reviews the existing technologies for the manufacture and application of beneficial bacteria in the field, and recapitulates the status of the microbe-based inoculants in World", "title": "" }, { "docid": "ada881da62d4ceff774cce82dde3c738", "text": "Characterizing information diffusion on social platforms like Twitter enables us to understand the properties of underlying media and model communication patterns. As Twitter gains in popularity, it has also become a venue to broadcast rumors and misinformation. We use epidemiological models to characterize information cascades in twitter resulting from both news and rumors. Specifically, we use the SEIZ enhanced epidemic model that explicitly recognizes skeptics to characterize eight events across the world and spanning a range of event types. We demonstrate that our approach is accurate at capturing diffusion in these events. Our approach can be fruitfully combined with other strategies that use content modeling and graph theoretic features to detect (and possibly disrupt) rumors.", "title": "" }, { "docid": "e797fbf7b53214df32d5694527ce5ba3", "text": "One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model 1 employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results.", "title": "" }, { "docid": "d3b6ba3e4b8e80c3c371226d7ae6d610", "text": "Interest in collecting and mining large sets of educational data on student background and performance to conduct research on learning and instruction has developed as an area generally referred to as learning analytics. Higher education leaders are recognizing the value of learning analytics for improving not only learning and teaching but also the entire educational arena. 
However, theoretical concepts and empirical evidence need to be generated within the fast evolving field of learning analytics. The purpose of the two reported cases studies is to identify alternative approaches to data analysis and to determine the validity and accuracy of a learning analytics framework and its corresponding student and learning profiles. The findings indicate that educational data for learning analytics is context specific and variables carry different meanings and can have different implications across educational institutions and area of studies. Benefits, concerns, and challenges of learning analytics are critically reflected, indicating that learning analytics frameworks need to be sensitive to idiosyncrasies of the educational institution and its stakeholders.", "title": "" }, { "docid": "4b557c498499c9bbb900d4983cc28426", "text": "Document clustering has not been well received as an information retrieval tool. Objections to its use fall into two main categories: first, that clustering is too slow for large corpora (with running time often quadratic in the number of documents); and second, that clustering does not appreciably improve retrieval.\nWe argue that these problems arise only when clustering is used in an attempt to improve conventional search techniques. However, looking at clustering as an information access tool in its own right obviates these objections, and provides a powerful new access paradigm. We present a document browsing technique that employs document clustering as its primary operation. We also present fast (linear time) clustering algorithms which support this interactive browsing paradigm.", "title": "" }, { "docid": "b59281f7deb759c5126687ab8df13527", "text": "Despite orthogeriatric management, 12% of the elderly experienced PUs after hip fracture surgery. PUs were significantly associated with a low albumin level, history of atrial fibrillation coronary artery disease, and diabetes. The risk ratio of death at 6 months associated with pressure ulcer was 2.38 (95% CI 1.31-4.32%, p = 0.044).\n\n\nINTRODUCTION\nPressure ulcers in hip fracture patients are frequent and associated with a poor outcome. An orthogeriatric management, recommended by international guidelines in hip fracture patients and including pressure ulcer prevention and treatment, could influence causes and consequences of pressure ulcer. However, remaining factors associated with pressure ulcer occurrence and prognostic value of pressure ulcer in hip fracture patients managed in an orthogeriatric care pathway remain unknown.\n\n\nMETHODS\nFrom June 2009 to April 2015, all consecutive patients with hip fracture admitted to a unit for Post-operative geriatric care were evaluated for eligibility. Patients were included if their primary presentation was due to hip fracture and if they were ≥ 70 years of age. Patients were excluded in the presence of pathological fracture or if they were already hospitalized at the time of the fracture. In our unit, orthogeriatric principles are implemented, including a multi-component intervention to improve pressure ulcer prevention and management. Patients were followed-up until 6 months after discharge.\n\n\nRESULTS\nFive hundred sixty-seven patients were included, with an overall 14.4% 6-month mortality (95% CI 11.6-17.8%). Of these, 67 patients (12%) experienced at least one pressure ulcer. 
Despite orthogeriatric management, pressure ulcers were significantly associated with a low albumin level (RR 0.90, 95% CI 0.84-0.96; p = 0.003) and history of atrial fibrillation (RR 1.91, 95% CI 1.05-3.46; p = 0.033), coronary artery disease (RR 2.16, 95% CI 1.17-3.99; p = 0.014), and diabetes (RR 2.33, 95% CI 1.14-4.75; p = 0.02). A pressure ulcer was associated with 6-month mortality (RR 2.38, 95% CI 1.31-4.32, p = 0.044).\n\n\nCONCLUSION\nIn elderly patients with hip fracture managed in an orthogeriatric care pathway, pressure ulcer remained associated with poorly modifiable risk factors and long-term mortality.", "title": "" }, { "docid": "ef62038ccd234d8d756059071ce34b82", "text": "Integration of architectural datasets concerning historic buildings depends on their interoperability, which has as first step a mapping to a common schema. The paper investigates current approaches and proposes mapping to a CIDOC-CRM extension as the common glue to overcome the fragmentation of datasets provided by large national institutions such as MIBAC in Italy, EH in the UK, and so on, and by EU projects, each one structured according to a different metadata schema. The paper describes the mapping of the MA-CA MIBAC-ICCD schemas, probably the most comprehensive, to CRM.", "title": "" }, { "docid": "90edad4c0a8209065638778e2cf28d1f", "text": "Christopher J.C. Burges Advanced Technologies, Bell Laboratories, Lucent Technologies Holmdel, New Jersey burges@lucent.com We show that the recently proposed variant of the Support Vector machine (SVM) algorithm, known as v-SVM, can be interpreted as a maximal separation between subsets of the convex hulls of the data, which we call soft convex hulls. The soft convex hulls are controlled by choice of the parameter v. If the intersection of the convex hulls is empty, the hyperplane is positioned halfway between them such that the distance between convex hulls, measured along the normal, is maximized; and if it is not, the hyperplane's normal is similarly determined by the soft convex hulls, but its position (perpendicular distance from the origin) is adjusted to minimize the error sum. The proposed geometric interpretation of v-SVM also leads to necessary and sufficient conditions for the existence of a choice of v for which the v-SVM solution is nontrivial.", "title": "" }, { "docid": "37257f51eddbad5d7a151c12083e51a7", "text": "As data rate pushes to 10Gbps and beyond, timing jitter has become one of the major factors that limit the link performance. Thorough understanding of the link jitter characteristics and accurate modeling of their impact on link performance is a must even at early design stage. This paper discusses the characteristics of timing jitter in typical I/O interfaces and overviews various jitter modeling methods proposed in the literature during the past few years. Recommendations are given based on the characteristics of timing jitter and their locations.", "title": "" }, { "docid": "a3421349059058a0c62105951e46435e", "text": "It has been proposed that two amino acid substitutions in the transcription factor FOXP2 have been positively selected during human evolution due to effects on aspects of speech and language. Here, we introduce these substitutions into the endogenous Foxp2 gene of mice. Although these mice are generally healthy, they have qualitatively different ultrasonic vocalizations, decreased exploratory behavior and decreased dopamine concentrations in the brain suggesting that the humanized Foxp2 allele affects basal ganglia. 
In the striatum, a part of the basal ganglia affected in humans with a speech deficit due to a nonfunctional FOXP2 allele, we find that medium spiny neurons have increased dendrite lengths and increased synaptic plasticity. Since mice carrying one nonfunctional Foxp2 allele show opposite effects, this suggests that alterations in cortico-basal ganglia circuits might have been important for the evolution of speech and language in humans.", "title": "" }, { "docid": "040b56db2f85ad43ed9f3f9adbbd5a71", "text": "This study examined the relations between source credibility of eWOM (electronic word of mouth), perceived risk and food products customer's information adoption mediated by argument quality and information usefulness. eWOM has been commonly used to refer the customers during decision-making process for food commodities. Based on this study, we used Elaboration Likelihood Model of information adoption presented by Sussman and Siegal (2003) to check the willingness to buy. Non-probability purposive samples of 300 active participants were taken through questionnaire from several regions of the Republic of China and analyzed the data through structural equation modeling (SEM) accordingly. We discussed that whether eWOM source credibility and perceived risk would impact the degree of information adoption through argument quality and information usefulness. It reveals that eWOM has positively influenced on perceived risk by source credibility to the extent of information adoption and, for this, customers use eWOM for the reduction of the potential hazards when decision making. Companies can make their marketing strategies according to their target towards loyal clients' needs through online foodproduct forums review sites. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c676aaeca813e9636a91a30d1ba82f13", "text": "BACKGROUND\nLateral ankle sprains may result in pain and disability in the short term, decreased sport activity and early retirement from sports in the mid term, and secondary injuries and development of early osteoarthritis to the ankle in the long term.\n\n\nHYPOTHESIS\nThis combined approach to chronic lateral instability and intra-articular lesions of the ankle is safe and in the long term maintains mechanical stability, functional ability, and a good level of sport activity.\n\n\nSTUDY DESIGN\nCase series; Level of evidence, 4.\n\n\nMETHODS\nWe present the long-term outcomes of 42 athletes who underwent ankle arthroscopy and anterior talofibular Broström repair for management of chronic lateral ankle instability. We assessed in all patients preoperative and postoperative anterior drawer test and side-to-side differences, American Orthopaedic Foot and Ankle Society (AOFAS) score, and Kaikkonen grading scales. Patients were asked about return to sport and level of activity. Patients were also assessed for development of degenerative changes to the ankle, and preoperative versus postoperative findings were compared.\n\n\nRESULTS\nThirty-eight patients were reviewed at an average of 8.7 years (range, 5-13 years) after surgery; 4 patients were lost to follow-up. At the last follow-up, patients were significantly improved for ankle laxity, AOFAS scores, and Kaikkonen scales. The mean AOFAS score improved from 51 (range, 32-71) to 90 (range, 67-100), and the mean Kaikkonen score improved from 45 (range, 30-70) to 90 (range, 65-100). According to outcome criteria set preoperatively, there were 8 failures by the AOFAS score and 9 by the Kaikkonen score. 
Twenty-two (58%) patients practiced sport at the preinjury level, 6 (16%) had changed to lower levels but were still active in less demanding sports (cycling and tennis), and 10 (26%) had abandoned active sport participation although they still were physically active. Six of these patients did not feel safe with their ankle because of the occurrence of new episodes of ankle instability. Of the 27 patients who had no evidence of degenerative changes preoperatively, 8 patients (30%) had radiographic signs of degenerative changes (5 grade I and 3 grade II) of the ankle; 4 of the 11 patients (11%) with preexisting grade I changes remained unchanged, and 7 patients (18%) had progressed to grade II. No correlation was found between osteoarthritis and status of sport activity (P = .72).\n\n\nCONCLUSION\nCombined Broström repair and ankle arthroscopy are safe and allow most patients to return to preinjury daily and sport activities.", "title": "" }, { "docid": "daf997a64778e0e2d5fc1a07ad69b0e4", "text": "A soft-switching single-ended primary inductor converter (SEPIC) is presented in this paper. An auxiliary switch and a clamp capacitor are connected. A coupled inductor and an auxiliary inductor are utilized to obtain ripple-free input current and achieve zero-voltage-switching (ZVS) operation of the main and auxiliary switches. The voltage multiplier technique and active clamp technique are applied to the conventional SEPIC converter to increase the voltage gain, reduce the voltage stresses of the power switches and diode. Moreover, by utilizing the resonance between the resonant inductor and the capacitor in the voltage multiplier circuit, the zero-current-switching operation of the output diode is achieved and its reverse-recovery loss is significantly reduced. The proposed converter achieves high efficiency due to soft-switching commutations of the power semiconductor devices. The presented theoretical analysis is verified by a prototype of 100 kHz and 80 W converter. Also, the measured efficiency of the proposed converter has reached a value of 94.8% at the maximum output power.", "title": "" }, { "docid": "83f1e80a8d4b54184531798559a028d5", "text": "Fast-response and high-sensitivity deep-ultraviolet (DUV) photodetectors with detection wavelength shorter than 320 nm are in high demand due to their potential applications in diverse fields. However, the fabrication processes of DUV detectors based on traditional semiconductor thin films are complicated and costly. Here we report a high-performance DUV photodetector based on graphene quantum dots (GQDs) fabricated via a facile solution process. The devices are capable of detecting DUV light with wavelength as short as 254 nm. With the aid of an asymmetric electrode structure, the device performance could be significantly improved. An on/off ratio of ∼6000 under 254 nm illumination at a relatively weak light intensity of 42 μW cm(-2) is achieved. The devices also exhibit excellent stability and reproducibility with a fast response speed. Given the solution-processing capability of the devices and extraordinary properties of GQDs, the use of GQDs will open up unique opportunities for future high-performance, low-cost DUV photodetectors.", "title": "" } ]
scidocsrr
98ca61dbd2b38ab2aa2ff4a96d6c9b31
Ensemble Robustness of Deep Learning Algorithms
[ { "docid": "2ade63ea07a7c744c9bbfeab40c4e679", "text": "Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.", "title": "" }, { "docid": "77655e3ed587676df9284c78eb36a438", "text": "We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.", "title": "" }, { "docid": "35625f248c81ebb5c20151147483f3f6", "text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.", "title": "" } ]
[ { "docid": "d418b7a6a78e7aacf1a31fbb792b4271", "text": "Multi-document summarization is the process of extracting salient information from a set of source texts and present that information to the user in a condensed form. In this paper, we propose a multi-document summarization system which generates an extractive generic summary with maximum relevance and minimum redundancy by representing each sentence of the input document as a vector of words in Proper Noun, Noun, Verb and Adjective set. Five features, such as TF ISF, Aggregate Cross Sentence Similarity, Title Similarity, Proper Noun and Sentence Length associated with the sentences, are extracted, and scores are assigned to sentences based on these features. Weights that can be assigned to different features may vary depending upon the nature of the document, and it is hard to discover the most appropriate weight for each feature, and this makes generation of a good summary a very tough task without human intelligence. Multi-document summarization problem is having large number of decision parameters and number of possible solutions from which most optimal summary is to be generated. Summary generated may not guarantee the essential quality and may be far from the ideal human generated summary. To address this issue, we propose a population-based multicriteria optimization method with multiple objective functions. Three objective functions are selected to determine an optimal summary, with maximum relevance, diversity, and nov∗Ansamma John Email addresses: ansamma.john@gmail.com (Ansamma John), premjith1190@gmail.com (Premjith P.S), wilsyphilipose@hotmail.com (Wilscy M) Preprint submitted to Elsevier May 30, 2017", "title": "" }, { "docid": "be66c05a023ea123a6f32614d2a8af93", "text": "During the past three decades, the issue of processing spectral phase has been largely neglected in speech applications. There is no doubt that the interest of speech processing community towards the use of phase information in a big spectrum of speech technologies, from automatic speech and speaker recognition to speech synthesis, from speech enhancement and source separation to speech coding, is constantly increasing. In this paper, we elaborate on why phase was believed to be unimportant in each application. We provide an overview of advancements in phase-aware signal processing with applications to speech, showing that considering phase-aware speech processing can be beneficial in many cases, while it can complement the possible solutions that magnitude-only methods suggest. Our goal is to show that phase-aware signal processing is an important emerging field with high potential in the current speech communication applications. The paper provides an extended and up-to-date bibliography on the topic of phase aware speech processing aiming at providing the necessary background to the interested readers for following the recent advancements in the area. Our review expands the step initiated by our organized special session and exemplifies the usefulness of spectral phase information in a wide range of speech processing applications. Finally, the overview will provide some future work directions.", "title": "" }, { "docid": "83958247682e3400b8ce2765130e1386", "text": "Deep neural networks have been shown to perform well in many classical machine learning problems, especially in image classification tasks. 
However, researchers have found that neural networks can be easily fooled, and they are surprisingly sensitive to small perturbations imperceptible to humans. Carefully crafted input images (adversarial examples) can force a well-trained neural network to provide arbitrary outputs. Including adversarial examples during training is a popular defense mechanism against adversarial attacks. In this paper we propose a new defensive mechanism under the generative adversarial network (GAN) framework. We model the adversarial noise using a generative network, trained jointly with a classification discriminative network as a minimax game. We show empirically that our adversarial network approach works well against black box attacks, with performance on par with state-of-art methods such as ensemble adversarial training and adversarial training with projected gradient descent.", "title": "" }, { "docid": "c3064eb07fb1f7d41958e76b77fc13f7", "text": "This paper presents a novel modeling called stacked timeasynchronous sequential networks (STASNs) for online endof-turn detection. An online end-of-turn detection that determines turn-taking points in a real-time manner is an essential component for human-computer interaction systems. In this study, we use long-range sequential information of multiple time-asynchronous sequential features, such as prosodic, phonetic, and lexical sequential features, to enhance online end-ofturn detection performance. Our key idea is to embed individual sequential features in a fixed-length continuous representation by using sequential networks. This enables us to simultaneously handle multiple time-asynchronous sequential features for end-of-turn detection. STASNs can embed all of the sequential information between a start-of-conversation and the current end-of-utterance in a fixed-length continuous representation that can be directly used for classification by stacking multiple sequential networks. Experiments show that STASNs outperforms conventional modeling with limited sequential information. Furthermore, STASNs with senone bottleneck features extracted using senone-based deep neural networks have superior performance without requiring lexical features decoded by an automatic speech recognition process.", "title": "" }, { "docid": "0af3e6e48d3745b7ea52aae25c26fe10", "text": "MOEA/D is a recently proposed methodology of Multiobjective Evolution Algorithms that decomposes multiobjective problems into a number of scalar subproblems and optimizes them simultaneously. However, classical MOEA/D uses same weight vectors for different shapes of Pareto front. We propose a novel method called Pareto-adaptive weight vectors (paλ) to automatically adjust the weight vectors by the geometrical characteristics of Pareto front. Evaluation on different multiobjective problems confirms that the new algorithm obtains higher hypervolume, better convergence and more evenly distributed solutions than classical MOEA/D and NSGA-II.", "title": "" }, { "docid": "aee115084c027ff5c69198ae481a860d", "text": "Malware is software designed to infiltrate or damage a computer system without the owner's informed consent (e.g., viruses, backdoors, spyware, trojans, and worms). Nowadays, numerous attacks made by the malware pose a major security threat to computer users. Unfortunately, along with the development of the malware writing techniques, the number of file samples that need to be analyzed, named \"gray list,\" on a daily basis is constantly increasing. 
In order to help our virus analysts, quickly and efficiently pick out the malicious executables from the \"gray list,\" an automatic and robust tool to analyze and classify the file samples is needed. In our previous work, we have developed an intelligent malware detection system (IMDS) by adopting associative classification method based on the analysis of application programming interface (API) execution calls. Despite its good performance in malware detection, IMDS still faces the following two challenges: (1) handling the large set of the generated rules to build the classifier; and (2) finding effective rules for classifying new file samples. In this paper, we first systematically evaluate the effects of the postprocessing techniques (e.g., rule pruning, rule ranking, and rule selection) of associative classification in malware detection, and then, propose an effective way, i.e., CIDCPF, to detect the malware from the \"gray list.\" To the best of our knowledge, this is the first effort on using postprocessing techniques of associative classification in malware detection. CIDCPF adapts the postprocessing techniques as follows: first applying Chi-square testing and Insignificant rule pruning followed by using Database coverage based on the Chi-square measure rule ranking mechanism and Pessimistic error estimation, and finally performing prediction by selecting the best First rule. We have incorporated the CIDCPF method into our existing IMDS system, and we call the new system as CIMDS system. Case studies are performed on the large collection of file samples obtained from the Antivirus Laboratory at Kingsoft Corporation and promising experimental results demonstrate that the efficiency and ability of detecting malware from the \"gray list\" of our CIMDS system outperform popular antivirus software tools, such as McAfee VirusScan and Norton Antivirus, as well as previous data-mining-based detection systems, which employed Naive Bayes, support vector machine, and decision tree techniques. In particular, our CIMDS system can greatly reduce the number of generated rules, which makes it easy for our virus analysts to identify the useful ones.", "title": "" }, { "docid": "301cf9a13184f2e7587f16b3de16222d", "text": "Recently, highly accurate positioning devices enable us to provide various types of location-based services. On the other hand, because position data obtained by such devices include deeply personal information, protection of location privacy is one of the most significant issues of location-based services. Therefore, we propose a technique to anonymize position data. In our proposed technique, the psrsonal user of a location-based service generates several false position data (dummies) sent to the service provider with the true position data of the user. Because the service provider cannot distinguish the true position data, the user’s location privacy is protected. We conducted performance study experiments on our proposed technique using practical trajectory data. As a result of the experiments, we observed that our proposed technique protects the location privacy of users.", "title": "" }, { "docid": "049def2d879d0b873132660b0b856443", "text": "This report explores the relationship between narcissism and unethical conduct in an organization by answering two questions: (1) In what ways does narcissism affect an organization?, and (2) What is the relationship between narcissism and the financial industry? 
Research suggests the overall conclusion that narcissistic individuals directly influence the identity of an organization and how it behaves. Ways to address these issues are shown using Enron as a case study example.", "title": "" }, { "docid": "be29160b73b9ab727eb760a108a7254a", "text": "Two-dimensional (2-D) analytical permanent-magnet (PM) eddy-current loss calculations are presented for slotless PM synchronous machines (PMSMs) with surface-inset PMs considering the current penetration effect. In this paper, the term slotless implies that either the stator is originally slotted but the slotting effects are neglected or the stator is originally slotless. The analytical magnetic field distribution is computed in polar coordinates from the 2-D subdomain method (i.e., based on formal resolution of Maxwell's equation applied in subdomain). Based on the predicted magnetic field distribution, the eddy-currents induced in the PMs are analytically obtained and the PM eddy-current losses considering eddy-current reaction field are calculated. The analytical expressions can be used for slotless PMSMs with any number of phases and any form of current and overlapping winding distribution. The effects of stator slotting are neglected and the current density distribution is modeled by equivalent current sheets located on the slot opening. To evaluate the efficacy of the proposed technique, the 2-D PM eddy-current losses for two slotless PMSMs are analytically calculated and compared with those obtained by 2-D finite-element analysis (FEA). The effects of the rotor rotational speed and the initial rotor mechanical angular position are investigated. The analytical results are in good agreement with those obtained by the 2-D FEA.", "title": "" }, { "docid": "de45682fcc57257365ae2a35978b8694", "text": "Colloidal particles play an important role in various areas of material and pharmaceutical sciences, biotechnology, and biomedicine. In this overview we describe micro- and nano-particles used for the preparation of polyelectrolyte multilayer capsules and as drug delivery vehicles. An essential feature of polyelectrolyte multilayer capsule preparations is the ability to adsorb polymeric layers onto colloidal particles or templates followed by dissolution of these templates. The choice of the template is determined by various physico-chemical conditions: solvent needed for dissolution, porosity, aggregation tendency, as well as release of materials from capsules. Historically, the first templates were based on melamine formaldehyde, later evolving towards more elaborate materials such as silica and calcium carbonate. Their advantages and disadvantages are discussed here in comparison to non-particulate templates such as red blood cells. Further steps in this area include development of anisotropic particles, which themselves can serve as delivery carriers. We provide insights into application of particles as drug delivery carriers in comparison to microcapsules templated on them.", "title": "" }, { "docid": "b3cb33f75e6bf54ede1b010001b1c725", "text": "In this work, a novel three-dimensional superposition algorithm for photon dose calculation is presented. The dose calculation is performed as a superposition of pencil beams, which are modified based on tissue electron densities. The pencil beams have been derived from Monte Carlo simulations, and are separated into lateral and depth-directed components. 
The lateral component is modeled using exponential functions, which allows accurate modeling of lateral scatter in heterogeneous tissues. The depth-directed component represents the total energy deposited on each plane, which is spread out using the lateral scatter functions. Finally, convolution in the depth direction is applied to account for tissue interface effects. The method can be used with the previously introduced multiple-source model for clinical settings. The method was compared against Monte Carlo simulations in several phantoms including lung- and bone-type heterogeneities. Comparisons were made for several field sizes for 6 and 18 MV energies. The deviations were generally within (2%, 2 mm) of the field central axis d(max). Significantly larger deviations (up to 8%) were found only for the smallest field in the lung slab phantom for 18 MV. The presented method was found to be accurate in a wide range of conditions making it suitable for clinical planning purposes.", "title": "" }, { "docid": "322f6321bc34750344064d474206fddb", "text": "BACKGROUND AND PURPOSE\nThis study was undertaken to elucidate whether and how age influences stroke outcome.\n\n\nMETHODS\nThis prospective and community-based study comprised 515 consecutive acute stroke patients. Computed tomographic scan was performed in 79% of patients. Activities of daily living (ADL) and neurological status were assessed weekly during hospital stay using the Barthel Index (BI) and the Scandinavian Stroke Scale (SSS), respectively. Information regarding social condition and comorbidity before stroke was also registered. A multiple regression model was used to analyze the independent influence of age on stroke outcome.\n\n\nRESULTS\nAge was not related to the type of stroke lesion or infarct size. However, age independently influenced initial BI (-4 points per 10 years, P < .01), initial SSS (-2 points per 10 years, P = .01), and discharge BI (-3 points per 10 years, P < .01). No independent influence of age was found regarding mortality within 3 months, discharge SSS, length of hospital stay, and discharge placement. ADL improvement was influenced independently by age (-3 points per 10 years, P < .01), whereas age had no influence on neurological improvement or on speed of recovery.\n\n\nCONCLUSIONS\nAge independently influences stroke outcome selectively in ADL-related aspects (BI) but not in neurological aspects (SSS), suggesting a poorer compensatory ability in elderly stroke patients. Therefore, rehabilitation of elderly stroke patients should be focused more on ADL and compensation rather than on the recovery of neurological status, and age itself should not be a selection criterion for rehabilitation.", "title": "" }, { "docid": "7d024e9ccf20923ade005970ddef1bcc", "text": "Mamdani Fuzzy Model is an important technique in Computational Intelligence (CI) study. This paper presents an implementation of a supervised learning method based on membership function training in the context of Mamdani fuzzy models. Specifically, auto zoom function of a digital camera is modelled using Mamdani technique. The performance of control method is verified through a series of simulation and numerical results are provided as illustrations. Keywords-component: Mamdani fuzzy model, fuzzy logic, auto zoom, digital camera", "title": "" }, { "docid": "422b5a17be6923df4b90eaadf3ed0748", "text": "Hate speech is currently of broad and current interest in the domain of social media. 
The anonymity and flexibility afforded by the Internet has made it easy for users to communicate in an aggressive manner. And as the amount of online hate speech is increasing, methods that automatically detect hate speech is very much required. Moreover, these problems have also been attracting the Natural Language Processing and Machine Learning communities a lot. Therefore, the goal of this paper is to look at how Natural Language Processing applies in detecting hate speech. Furthermore, this paper also applies a current technique in this field on a dataset. As neural network approaches outperforms existing methods for text classification problems, a deep learning model has been introduced, namely the Convolutional Neural Network. This classifier assigns each tweet to one of the categories of a Twitter dataset: hate, offensive language, and neither. The performance of this model has been tested using the accuracy, as well as looking at the precision, recall and F-score. The final model resulted in an accuracy of 91%, precision of 91%, recall of 90% and a F-measure of 90%. However, when looking at each class separately, it should be noted that a lot of hate tweets have been misclassified. Therefore, it is recommended to further analyze the predictions and errors, such that more insight is gained on the misclassification.", "title": "" }, { "docid": "8aa50ef9e3a774294f7b4b2aaa4664f8", "text": "Spatial filtering for EEG feature extraction and classification is an important tool in brain-computer interface. However, there is generally no established theory that links spatial filtering directly to Bayes classification error. To address this issue, this paper proposes and studies a Bayesian analysis theory for spatial filtering in relation to Bayes error. Following the maximum entropy principle, we introduce a gamma probability model for describing single-trial EEG power features. We then formulate and analyze the theoretical relationship between Bayes classification error and the so-called Rayleigh quotient, which is a function of spatial filters and basically measures the ratio in power features between two classes. This paper also reports our extensive study that examines the theory and its use in classification, using three publicly available EEG data sets and state-of-the-art spatial filtering techniques and various classifiers. Specifically, we validate the positive relationship between Bayes error and Rayleigh quotient in real EEG power features. Finally, we demonstrate that the Bayes error can be practically reduced by applying a new spatial filter with lower Rayleigh quotient.", "title": "" }, { "docid": "e729189474d9b5e40d3af2c9caba47d9", "text": "In the process of online storytelling, individual users create and consume highly diverse content that contains a great deal of implicit beliefs and not plainly expressed narrative. It is hard to manually detect these implicit beliefs, intentions and moral foundations of the writers. We study and investigate two different tasks, each of which reflect the difficulty of detecting an implicit user’s knowledge, intent or belief that may be based on writer’s moral foundation: 1) political perspective detection in news articles 2) identification of informational vs. conversational questions in community question answering (CQA) archives and. In both tasks we first describe new interesting annotated datasets and make the datasets publicly available. 
Second, we compare various classification algorithms, and show the differences in their performance on both tasks. Third, in political perspective detection task we utilize a narrative representation language of local press to identify perspective differences between presumably neutral American and British press. IMPLICIT DIMENSION IDENTIFICATION IN USER-GENERATED TEXT 3 Implicit Dimension Identification in User-Generated Text with LSTM Networks", "title": "" }, { "docid": "4e50e68e099ab77aedcb0abe8b7a9ca2", "text": "In the downlink transmission scenario, power allocation and beamforming design at the transmitter are essential when using multiple antenna arrays. This paper considers a multiple input–multiple output broadcast channel to maximize the weighted sum-rate under the total power constraint. The classical weighted minimum mean-square error (WMMSE) algorithm can obtain suboptimal solutions but involves high computational complexity. To reduce this complexity, we propose a fast beamforming design method using unsupervised learning, which trains the deep neural network (DNN) offline and provides real-time service online only with simple neural network operations. The training process is based on an end-to-end method without labeled samples avoiding the complicated process of obtaining labels. Moreover, we use the “APoZ”-based pruning algorithm to compress the network volume, which further reduces the computational complexity and volume of the DNN, making it more suitable for low computation-capacity devices. Finally, the experimental results demonstrate that the proposed method improves computational speed significantly with performance close to the WMMSE algorithm.", "title": "" }, { "docid": "72d0731d0fc4f32b116afa207c9aefdd", "text": "Internet of Things (IoT) is based on a wireless network that connects a huge number of smart objects, products, smart devices, and people. It has another name which is Web of Things (WoT). IoT uses standards and protocols that are proposed by different standardization organizations in message passing within session layer. Most of the IoT applications protocols use TCP or UDP for transport. XMPP, CoAP, DDS, MQTT, and AMQP are grouped of the widely used application protocols. Each one of these protocols have specific functions and are used in specific way to handle some issues. This paper provides an overview for one of the most popular application layer protocols that is MQTT, including its architecture, message format, MQTT scope, and Quality of Service (QoS) for the MQTT levels. MQTT works mainly as a pipe for binary data and provides a flexibility in communication patterns. It is designed to provide a publish-subscribe messaging protocol with most possible minimal bandwidth requirements. MQTT uses Transmission Control Protocol (TCP) for transport. MQTT is an open standard, giving a mechanisms to asynchronous communication, have a range of implementations, and it is working on IP.", "title": "" }, { "docid": "6c97853046dd2673d9c83990119ef43c", "text": "Atomic actions (or transactions) are useful for coping with concurrency and failures. One way of ensuring atomicity of actions is to implement applications in terms of atomic data types: abstract data types whose objects ensure serializability and recoverability of actions using them. Many atomic types can be implemented to provide high levels of concurrency by taking advantage of algebraic properties of the type's operations, for example, that certain operations commute. 
In this paper we analyze the level of concurrency permitted by an atomic type. We introduce several local constraints on individual objects that suffice to ensure global atomicity of actions; we call these constraints local atomicity properties. We present three local atomicity properties, each of which is optimal: no strictly weaker local constraint on objects suffices to ensure global atomicity for actions. Thus, the local atomicity properties define precise limits on the amount of concurrency that can be permitted by an atomic type.", "title": "" }, { "docid": "58156df07590448d89c2b8d4a46696ad", "text": "Gene PmAF7DS confers resistance to wheat powdery mildew (isolate Bgt#211 ); it was mapped to a 14.6-cM interval ( Xgwm350 a– Xbarc184 ) on chromosome 7DS. The flanking markers could be applied in MAS breeding. Wheat powdery mildew (Pm) is caused by the biotrophic pathogen Blumeria graminis tritici (DC.) (Bgt). An ongoing threat of breakdown of race-specific resistance to Pm requires a continuous effort to discover new alleles in the wheat gene pool. Developing new cultivars with improved disease resistance is an economically and environmentally safe approach to reduce yield losses. To identify and characterize genes for resistance against Pm in bread wheat we used the (Arina × Forno) RILs population. Initially, the two parental lines were screened with a collection of 61 isolates of Bgt from Israel. Three Pm isolates Bgt#210 , Bgt#211 and Bgt#213 showed differential reactions in the parents: Arina was resistant (IT = 0), whereas Forno was moderately susceptible (IT = −3). Isolate Bgt#211 was then used to inoculate the RIL population. The segregation pattern of plant reactions among the RILs indicates that a single dominant gene controls the conferred resistance. A genetic map of the region containing this gene was assembled with DNA markers and assigned to the 7D physical bin map. The gene, temporarily designated PmAF7DS, was located in the distal region of chromosome arm 7DS. The RILs were also inoculated with Bgt#210 and Bgt#213. The plant reactions to these isolates showed high identity with the reaction to Bgt#211, indicating the involvement of the same gene or closely linked, but distinct single genes. The genomic location of PmAF7DS, in light of other Pm genes on 7DS is discussed.", "title": "" } ]
scidocsrr
8fc7673a8170208caef26aaf8e5d9418
Multimodal Human Activity Recognition for Industrial Manufacturing Processes in Robotic Workcells
[ { "docid": "8c70f1af7d3132ca31b0cf603b7c5939", "text": "Much of the existing work on action recognition combines simple features (e.g., joint angle trajectories, optical flow, spatio-temporal video features) with somewhat complex classifiers or dynamical models (e.g., kernel SVMs, HMMs, LDSs, deep belief networks). Although successful, these approaches represent an action with a set of parameters that usually do not have any physical meaning. As a consequence, such approaches do not provide any qualitative insight that relates an action to the actual motion of the body or its parts. For example, it is not necessarily the case that clapping can be correlated to hand motion or that walking can be correlated to a specific combination of motions from the feet, arms and body. In this paper, we propose a new representation of human actions called Sequence of the Most Informative Joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action. The selection of joints is based on highly interpretable measures such as the mean or variance of joint angles, maximum angular velocity of joints, etc. We then represent an action as a sequence of these most informative joints. Our experiments on multiple databases show that the proposed representation is very discriminative for the task of human action recognition and performs better than several state-of-the-art algorithms.", "title": "" } ]
[ { "docid": "df4fbaf83a761235c5d77654973b5eb1", "text": "We add to the discussion of how to assess the creativity of programs which generate artefacts such as poems, theorems, paintings, melodies, etc. To do so, we first review some existing frameworks for assessing artefact generation programs. Then, drawing on our experience of building both a mathematical discovery system and an automated painter, we argue that it is not appropriate to base the assessment of a system on its output alone, and that the way it produces artefacts also needs to be taken into account. We suggest a simple framework within which the behaviour of a program can be categorised and described which may add to the perception of creativity in the system.", "title": "" }, { "docid": "36419bc98c150698453859d2c781d8ee", "text": "Consolidated tables showing an extensive listing of the highest independently confirmed efficiencies for solar cells and modules are presented. Guidelines for inclusion of results into these tables are outlined and new entries since January 2010 are reviewed. Copyright # 2010 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "f3d03c33d011d4bdd350f558cae922e1", "text": "In a laboratory environment, the practicality and scope of experiments is often constrained by time and financial resources. In the digital hardware design arena, the development of programmable logic devices, such as field--programmable gate arrays (FPGAs), has greatly enhanced the student’s ability to design and synthesize complete systems within a short period of time and at a reasonable cost. Unfortunately, analog circuit design and signal processing have not enjoyed similar advances. However, new advances in field--programmable analog arrays (FPAAs) have created many new opportunities in analog circuit design and signal processing education. This paper will investigate the usefulness of these FPAAs as viable pedagogical tools. It will also explore the new methodologies in analog signal processing education that are available when FPAAs are brought into the classroom.", "title": "" }, { "docid": "9f328d46c30cac9bb210582113683432", "text": "Clinical and hematologic studies of 16 adult patients whose leukemic cells had Tcell markers are reported from Japan, where the incidence of various lymphoproliferative diseases differs considerably from that in Western countries. Leukemic cells were studied by cytotoxicity tests with specific antisera against human T (ATS) and B cells (ABS) in addition to the usual Tand B-cell markers (E rosette, EAC rosette, and surface immunoglobulins). Characteristics of the clinical and hematologic findings were as follows: (1) onset in adulthood; (2) subacute or chronic leukemia with rapidly progressive terminal course; (3) leukemic cells killed by ATS and forming E rosettes; (4) Icykemic cells not morphologically monotonous and frequent cells with deeply indented or lobulated nuclei; (5) frequent skin involvement (9 patients); (6) common lymphadenopathy and hepatosplenomegaly; (7) no mediastinal mass; and, the most striking finding, (8) the clustering of the patients’ birthplaces, namely, 13 patients born in Kyushu. The relation. 
ship between our cases and other subacute or chronic adult T-cell malignancies such as chronic lymphocytic leukemia of T-cell origin, prolymphocytic leukemia with T-cell properties, Sézary syndrome, and mycosis fungoides is discussed.", "title": "" }, { "docid": "5d8aaba4da6c6aebf08d241484451ea8", "text": "The lack of a friendly and flexible operational model of landside operations motivated the creation of a new simulation model adaptable to various airport configurations for estimating the time behavior of passenger and baggage flows, the elements’ capacities and the delays in a generic airport terminal. The validation of the model has been conducted by comparison with the results of previous research about the average behavior of the future Athens airport. In the mean time the proposed model provided interesting dynamical results about both passenger and baggage movements in the system.", "title": "" }, { "docid": "1d8667d40c6e6cd5881cf4fa0b788f10", "text": "While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.", "title": "" }, { "docid": "08e21e7a4e944f06c4a4502dcdb3d854", "text": "Numquam ponenda est pluralitas sine necessitate 'Plurality should never be proposed unless needed' William of Occam Classification lies at the heart of both human and machine intelligence. Deciding what letter, word, or image has been presented to our senses, recognizing faces or voices, sorting mail, assigning grades to homeworks, are all examples of assigning a class or category to an input. The potential challenges of this task are highlighted by the fabulist Jorge Luis Borges (1964), who imagined classifying animals into: (a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, (d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray dogs, (h) those that are included in this classification, (i) those that tremble as if they were mad, (j) innumerable ones, (k) those drawn with a very fine camel's hair brush, (l) others, (m) those that have just broken a flower vase, (n) those that resemble flies from a distance. While many language processing tasks can be productively viewed as tasks of classification, the classes are luckily far more practical than those of Borges. In this chapter we present two general algorithms for classification, demonstrated on one important set of classification problems: text categorization, the task of classifying an entire text by assigning it a label drawn from some set of labels. 
We focus on one common text categorization task, sentiment analysis, the ex-sentiment analysis traction of sentiment, the positive or negative orientation that a writer expresses toward some object. A review of a movie, book, or product on the web expresses the author's sentiment toward the product, while an editorial or political text expresses sentiment toward a candidate or political action. Automatically extracting consumer sentiment is important for marketing of any sort of product, while measuring public sentiment is important for politics and also for market prediction. The simplest version of sentiment analysis is a binary classification task, and the words of the review provide excellent cues. Consider, for example, the following phrases extracted from positive and negative reviews of movies and restaurants,. Words like great, richly, awesome, and pathetic, and awful and ridiculously are very informative cues: + ...zany characters and richly applied satire, and some great plot twists − It was pathetic. The worst part about it was the boxing scenes... + ...awesome caramel sauce and sweet toasty almonds. I love this place! − ...awful pizza and ridiculously overpriced... …", "title": "" }, { "docid": "4107fe17e6834f96a954e13cbb920f78", "text": "Non-orthogonal multiple access (NOMA) can support more users than OMA techniques using the same wireless resources, which is expected to support massive connectivity for Internet of Things in 5G. Furthermore, in order to reduce the transmission latency and signaling overhead, grant-free transmission is highly expected in the uplink NOMA systems, where user activity has to be detected. In this letter, by exploiting the temporal correlation of active user sets, we propose a dynamic compressive sensing (DCS)-based multi-user detection (MUD) to realize both user activity and data detection in several continuous time slots. In particular, as the temporal correlation of the active user sets between adjacent time slots exists, we can use the estimated active user set in the current time slot as the prior information to estimate the active user set in the next time slot. Simulation results show that the proposed DCS-based MUD can achieve much better performance than that of the conventional CS-based MUD in NOMA systems.", "title": "" }, { "docid": "758eb7a0429ee116f7de7d53e19b3e02", "text": "With the rapid development of the Internet, many types of websites have been developed. This variety of websites makes it necessary to adopt systemized evaluation criteria with a strong theoretical basis. This study proposes a set of evaluation criteria derived from an architectural perspective which has been used for over a 1000 years in the evaluation of buildings. The six evaluation criteria are internal reliability and external security for structural robustness, useful content and usable navigation for functional utility, and system interface and communication interface for aesthetic appeal. The impacts of the six criteria on user satisfaction and loyalty have been investigated through a large-scale survey. The study results indicate that the six criteria have different impacts on user satisfaction for different types of websites, which can be classified along two dimensions: users’ goals and users’ activity levels.", "title": "" }, { "docid": "0bcc5beb8bada39446c1dd32d0a65dec", "text": "Clustering is a powerful tool in data analysis, but it is often difficult to find a grouping that aligns with a user’s needs. 
To address this, several methods incorporate constraints obtained from users into clustering algorithms, but unfortunately do not apply to hierarchical clustering. We design an interactive Bayesian algorithm that incorporates user interaction into hierarchical clustering while still utilizing the geometry of the data by sampling a constrained posterior distribution over hierarchies. We also suggest several ways to intelligently query a user. The algorithm, along with the querying schemes, shows promising results on real data.", "title": "" }, { "docid": "840712346d5b8896e37966bd9084cb2a", "text": "In thermodynamic terms, ecosystems are machines supplied with energy from an external source, usually the sun. When the input of energy to an ecosystem is exactly equal to its total output of energy, the state of equilibrium which exists is a special case of the First Law of Thermodynamics. The Second Law is relevant too. It implies that in every spontaneous process, physical or chemical, the production of 'useful' energy, which could be harnessed in a form such as mechanical work, must be accompanied by a simultaneous 'waste' of heat. No biological system can break or evade this law. The heat produced by a respiring cell is an inescapable component of cellular metabolism, the cost which Nature has to pay for creating biological order out of physical chaos in the environment of plants and animals. Dividing the useful energy of a thermodynamic process by the total energy involved gives a figure for the efficiency of the process, and this procedure has been widely used to analyse the flow of energy in ecosystems. For example, the efficiency with which a stand of plants produces dry matter by photosynthesis can be defined as the ratio of chemical energy stored in the assimilates to radiant energy absorbed by foliage during the period of assimilation. The choice of absorbed energy as a base for calculating efficiency is convenient but arbitrary. To derive an efficiency depending on the environment of a particular site as well as on the nature of the vegetation, dry matter production can be related to the receipt of solar energy at the top of the earth's atmosphere. This exercise was attempted by Professor William Thomson, later Lord Kelvin, in 1852. 'The author estimates the mechanical value of the solar heat which, were none of it absorbed by the atmosphere, would fall annually on each square foot of land, at 530 000 000 foot pounds; and infers that probably a good deal more, 1/1000 of the solar heat, which actually falls on growing plants, is converted into mechanical effect.' Outside the earth's atmosphere, a surface kept at right angles to the sun's rays receives energy at a mean rate of 1360 W m-2 or 1.36 kJ m-2 s-1, a figure known as the solar constant. As the energy stored by plants is about 17 kJ per gram of dry matter, the solar constant is equivalent to the production of dry matter at a rate of about 1 g m-2 every 12 s, 7 kg m-2 per day, or 2.6 t m-2 year-1. The annual yield of agricultural crops ranges from a maximum of 30-60 t ha-1 in field experiments to less than 1 t ha-1 in some forms of subsistence farming. When these rates are expressed as a fraction of the integrated solar constant, the efficiencies of agricultural systems lie between 0.2 and 0.004%, a range including Kelvin's estimate of 0.1%.
Conventional estimates of efficiency interms of the amount of solar radiation incident at the earth's surface provide ecologists and agronomists with a method for comparing plant productivity under different systems of land use and management and in different * Opening paper read at IBP/UNESCO Meeting on Productivity ofTropical Ecosystems, Makerere University, Uganda, September 1970.", "title": "" }, { "docid": "14b6af9d7199f724112021f81694c7ea", "text": "Much research indicates that East Asians, more than Americans, explain events with reference to the context. The authors examined whether East Asians also attend to the context more than Americans do. In Study 1, Japanese and Americans watched animated vignettes of underwater scenes and reported the contents. In a subsequent recognition test, they were shown previously seen objects as well as new objects, either in their original setting or in novel settings, and then were asked to judge whether they had seen the objects. Study 2 replicated the recognition task using photographs of wildlife. The results showed that the Japanese (a) made more statements about contextual information and relationships than Americans did and (b) recognized previously seen objects more accurately when they saw them in their original settings rather than in the novel settings, whereas this manipulation had relatively little effect on Americans.", "title": "" }, { "docid": "0d722ecc5bd9de4151efa09b55de7b8a", "text": "As international research studies become more commonplace, the importance of developing multilingual research instruments continues to increase and with it that of translated materials. It is therefore not unexpected that assessing the quality of translated materials (e.g., research instruments, questionnaires, etc.) has become essential to cross-cultural research, given that the reliability and validity of the research findings crucially depend on the translated instruments. In some fields (e.g., public health and medicine), the quality of translated instruments can also impact the effectiveness and success of interventions and public campaigns. Back-translation (BT) is a commonly used quality assessment tool in cross-cultural research. This quality assurance technique consists of (a) translation (target text [TT1]) of the source text (ST), (b) translation (TT2) of TT1 back into the source language, and (c) comparison of TT2 with ST to make sure there are no discrepancies. The accuracy of the BT with respect to the source is supposed to reflect equivalence/accuracy of the TT. This article shows how the use of BT as a translation quality assessment method can have a detrimental effect on a research study and proposes alternatives to BT. One alternative is illustrated on the basis of the translation and quality assessment methods used in a research study on hearing loss carried out in a border community in the southwest of the United States.", "title": "" }, { "docid": "3be6521b5100acfd1f006d4a9ffb7fb2", "text": "Credit scoring model development became a very important issue as the credit industry has many competitions and bad debt problems. Therefore, most credit scoring models have been widely studied in the areas of statistics to improve the accuracy of credit scoring models during the past few years. In order to solve the classification and decrease the Type I error of credit scoring model, this paper presents a reassigning credit scoring model (RCSM) involving two stages. 
The classification stage is constructing an ANN-based credit scoring model, which classifies applicants with accepted (good) or rejected (bad) credits. The reassign stage is trying to reduce the Type I error by reassigning the rejected good credit applicants to the conditional accepted class by using the CBR-based classification technique. To demonstrate the effectiveness of proposed model, RCSM is performed on a credit card dataset obtained from UCI repository. As the results indicated, the proposed model not only proved more accurate credit scoring than other four common used approaches, but also contributes to increase business revenue by decreasing the Type I and Type II error of scoring system. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "66056b4d6cd15282e676a836cc31f8de", "text": "In this paper, we propose a new approach for cross-scenario clothing retrieval and fine-grained clothing style recognition. The query clothing photos captured by cameras or other mobile devices are filled with noisy background while the product clothing images online for shopping are usually presented in a pure environment. We tackle this problem by two steps. Firstly, a hierarchical super-pixel merging algorithm based on semantic segmentation is proposed to obtain the intact query clothing item. Secondly, aiming at solving the problem of clothing style recognition in different scenarios, we propose sparse coding based on domain-adaptive dictionary learning to improve the accuracy of the classifier and adaptability of the dictionary. In this way, we obtain fine-grained attributes of the clothing items and use the attributes matching score to re-rank the retrieval results further. The experiment results show that our method outperforms the state-of-the-art approaches. Furthermore, we build a well labeled clothing dataset, where the images are selected from 1.5 billion product clothing images.", "title": "" }, { "docid": "83637dc7109acc342d50366f498c141a", "text": "With the further development of computer technology, the software development process has some new goals and requirements. In order to adapt to these changes, people has optimized and improved the previous method. At the same time, some of the traditional software development methods have been unable to adapt to the requirements of people. Therefore, in recent years there have been some new lightweight software process development methods, That is agile software development, which is widely used and promoted. In this paper the author will firstly introduces the background and development about agile software development, as well as comparison to the traditional software development. Then the second chapter gives the definition of agile software development and characteristics, principles and values. In the third chapter the author will highlight several different agile software development methods, and characteristics of each method. In the fourth chapter the author will cite a specific example, how agile software development is applied in specific areas.Finally the author will conclude his opinion. This article aims to give readers a overview of agile software development and how people use it in practice.", "title": "" }, { "docid": "9e722237e6bf8b046a02d1c43f82327a", "text": "For the alarming growth in consumer credit in recent years, consumer credit scoring is the term used to describe methods of classifying credits’ applicants as `good' and `bad' risk classes.. 
In the current paper, we use the logistic regression as well as the discriminant analysis in order to develop predictive models allowing to distinguish between “good” and “bad” borrowers. The data have been collected from a commercial Tunisian bank over a 3-year period, from 2010 to 2012. These data consist of four selected and ordered variables. By comparing the respective performances of the Logistic Regression (LR) and the Discriminant Analysis (DA), we notice that the LR model yields a 89% good classification rate in predicting customer types and then, a significantly low error rate (11%), as compared with the DA approach (where the good classification rate is only equal to 68.49%, leading to a significantly high error rate, i.e. 31.51%). © 2016 AESS Publications. All Rights Reserved.", "title": "" }, { "docid": "162f080444935117c5125ae8b7c3d51e", "text": "The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter’s loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.1", "title": "" }, { "docid": "744edec2b92f84dda850de14ddc09972", "text": "Computing systems are becoming increasingly parallel and heterogeneous, and therefore new applications must be capable of exploiting parallelism in order to continue achieving high performance. However, targeting these emerging devices often requires using multiple disparate programming models and making decisions that can limit forward scalability. In previous work we proposed the use of domain-specific languages (DSLs) to provide high-level abstractions that enable transformations to high performance parallel code without degrading programmer productivity. In this paper we present a new end-to-end system for building, compiling, and executing DSL applications on parallel heterogeneous hardware, the Delite Compiler Framework and Runtime. The framework lifts embedded DSL applications to an intermediate representation (IR), performs generic, parallel, and domain-specific optimizations, and generates an execution graph that targets multiple heterogeneous hardware devices. Finally we present results comparing the performance of several machine learning applications written in OptiML, a DSL for machine learning that utilizes Delite, to C++ and MATLAB implementations. We find that the implicitly parallel OptiML applications achieve single-threaded performance comparable to C++ and outperform explicitly parallel MATLAB in nearly all cases.", "title": "" } ]
scidocsrr
31bcccc80040c035936b7ac5824c0b39
Interactive 3D reconstruction from multiple images: A primitive-based approach
[ { "docid": "4a5cfc32cccc96c49739cc49f311ddb4", "text": "We present an approach for creating realistic synthetic views of existing architectural scenes from a sparse set of still photographs. Our approach, which combines both geometrybased and image-based modeling and rendering techniques, has two components. The rst component is an easy-to-use photogrammetric modeling system which facilitates the recovery of a basic geometric model of the photographed scene. The modeling system is e ective and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo approach can robustly recover accurate depth from image pairs with large baselines. Consequently, our approach can model large architectural environments with far fewer photographs than current imagebased modeling approaches. As an intermediate result, we present view-dependent texture mapping, a method of better simulating geometric detail on basic models. Our approach can recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach's abilty to create realistic renderings of architectural scenes from viewpoints far from the original photographs.", "title": "" }, { "docid": "068407ac06ab995b2c9710c08a130dc3", "text": "I review current approaches to structure from motion (SFM) and suggest a framework for designing new algorithms. The discussion focuses on reconstruction rather than on correspondence and on algorithms reconstructing from many images. I argue that it is important to base experiments and algorithm design on theoretical analyses of algorithm behavior and on an understanding of the intrinsic, algorithm-independent properties of SFM optimal estimation. I propose new theoretical analyses as examples, which suggest a range of experimental questions about current algorithms as well as new types of algorithms. The paper begins with a review of several of the important multi-image-based approaches to SFM, including optimization, fusing (e.g., Kalman filtering), projective methods, and invariant-based algorithms. I suggest that optimization by means of general minimization techniques needs to be supplemented by a theoretical understanding of the SFM least-squares error surface. I argue that fusing approaches are essentially no more robust than algorithms reconstructing from a small number of images and advocate experiments to determine the limitations of fusing. I also propose that fusing may be one of the best reconstruction strategies in situations where few-image algorithms give reasonable results, and suggest that an experimental understanding of the properties of few-image algorithms is important for designing good fusing methods. I emphasize the advantages of an approach based on fusing image-pair reconstructions. With regard to the projective approach, I argue that its trade-off of simplicity versus accuracy/robustness needs more careful experimental examination, and I advocate more research on the effects of calibration error on Euclidean reconstruction. I point out the relative lack of research on adapting Euclidean approaches to deal with incomplete knowledge of the calibration. I argue that invariant-based algorithms could be more nonrobust and inaccurate, and not necessarily much faster, than an approach fusing two-image optimizations. 
Based on recent results showing that two-image reconstructions are nearly as accurate as multi-image ones, I suggest that the authors of invariants methods conduct careful comparisons of their algorithms to two-image-based results. The remainder of the paper discusses the issues involved in designing a generally applicable SFM algorithm. I argue that current SFM algorithms perform well only in restricted domains, and that different types of algorithms do well on quite different types of sequences. I present examples of three domains that are important in applications and describe three types of", "title": "" } ]
[ { "docid": "48168ed93d710d3b85b7015f2c238094", "text": "ion and hierarchical information processing are hallmarks of human and animal intelligence underlying the unrivaled flexibility of behavior in biological systems. Achieving such flexibility in artificial systems is challenging, even with more and more computational power. Here, we investigate the hypothesis that abstraction and hierarchical information processing might in fact be the consequence of limitations in information-processing power. In particular, we study an information-theoretic framework of bounded rational decision-making that trades off utility maximization against information-processing costs. We apply the basic principle of this framework to perception-action systems with multiple information-processing nodes and derive bounded-optimal solutions. We show how the formation of abstractions and decision-making hierarchies depends on information-processing costs. We illustrate the theoretical ideas with example simulations and conclude by formalizing a mathematically unifying optimization principle that could potentially be extended to more complex systems.", "title": "" }, { "docid": "b56cd1e9392976f48dddf7d3a60c5aef", "text": "This paper presents a novel single-switch converter with high voltage gain and low voltage stress for photovoltaic applications. The proposed converter is composed of coupled-inductor and switched-capacitor techniques to achieve high step-up conversion ratio without adopting extremely high duty ratio or high turns ratio. The capacitors are charged in parallel and discharged in series by the coupled inductor to achieve high step-up voltage gain with an appropriate duty ratio. Besides, the voltage stress on the main switch is reduced with a passive clamp circuit, and the conduction losses are reduced. In addition, the reverse-recovery problem of the diode is alleviated by a coupled inductor. Thus, the efficiency can be further improved. The operating principle, steady state analysis and design of the proposed single switch converter with high step-up gain is carried out. A 24 V input voltage, 400 V output, and 300W maximum output power integrated converter is designed and analysed using MATLAB simulink. Simulation result proves the performance and functionality of the proposed single switch DC-DC converter for validation.", "title": "" }, { "docid": "6d262d30db4d6db112f40e5820393caf", "text": "This study sought to examine the effects of service quality and customer satisfaction on the repurchase intentions of customers of restaurants on University of Cape Coast Campus. The survey method was employed involving a convenient sample of 200 customers of 10 restaurants on the University of Cape Coast Campus. A modified DINESERV scale was used to measure customers’ perceived service quality. The results of the study indicate that four factors accounted for 50% of the variance in perceived service quality, namely; responsivenessassurance, empathy-equity, reliability and tangibles. Service quality was found to have a significant effect on customer satisfaction. Also, both service quality and customer satisfaction had significant effects on repurchase intention. However, customer satisfaction could not moderate the effect of service quality on repurchase intention. 
This paper adds to the debate on the dimensions of service quality and provides evidence on the effects of service quality and customer satisfaction on repurchase intention in a campus food service context.", "title": "" }, { "docid": "3e570e415690daf143ea30a8554b0ac8", "text": "Innovative technology on intelligent processes for smart home applications that utilize Internet of Things (IoT) is mainly limited and dispersed. The available trends and gaps were investigated in this study to provide valued visions for technical environments and researchers. Thus, a survey was conducted to create a coherent taxonomy on the research landscape. An extensive search was conducted for articles on (a) smart homes, (b) IoT and (c) applications. Three databases, namely, IEEE Explore, ScienceDirect and Web of Science, were used in the article search. These databases comprised comprehensive literature that concentrate on IoT-based smart home applications. Subsequently, filtering process was achieved on the basis of intelligent processes. The final classification scheme outcome of the dataset contained 40 articles that were classified into four classes. The first class includes the knowledge engineering process that examines data representation to identify the means of accomplishing a task for IoT applications and their utilisation in smart homes. The second class includes papers on the detection process that uses artificial intelligence (AI) techniques to capture the possible changes in IoT-based smart home applications. The third class comprises the analytical process that refers to the use of AI techniques to understand the underlying problems in smart homes by inferring new knowledge and suggesting appropriate solutions for the problem. The fourth class comprises the control process that describes the process of measuring and instructing the performance of IoT-based smart home applications against the specifications with the involvement of intelligent techniques. The basic features of this evolving approach were then identified in the aspects of motivation of intelligent process utilisation for IoT-based smart home applications and open-issue restriction utilisation. The recommendations for the approval and utilisation of intelligent process for IoT-based smart home applications were also determined from the literature.", "title": "" }, { "docid": "4c05d5add4bd2130787fd894ce74323a", "text": "Although semi-supervised model can extract the event mentions matching frequent event patterns, it suffers much from those event mentions, which match infrequent patterns or have no matching pattern. To solve this issue, this paper introduces various kinds of linguistic knowledge-driven event inference mechanisms to semi-supervised Chinese event extraction. These event inference mechanisms can capture linguistic knowledge from four aspects, i.e. semantics of argument role, compositional semantics of trigger, consistency on coreference events and relevant events, to further recover missing event mentions from unlabeled texts. Evaluation on the ACE 2005 Chinese corpus shows that our event inference mechanisms significantly outperform the refined state-of-the-art semi-supervised Chinese event extraction system in F1-score by 8.5%.", "title": "" }, { "docid": "093b6753e5d33f5fe72ac8ba45fca7c5", "text": "OBJECTIVE\nEffective treatments for obsessive-compulsive disorder (OCD) exist, but additional treatment options are needed. 
The effectiveness of 8 sessions of acceptance and commitment therapy (ACT) for adult OCD was compared with progressive relaxation training (PRT).\n\n\nMETHOD\nSeventy-nine adults (61% female) diagnosed with OCD (mean age = 37 years; 89% Caucasian) participated in a randomized clinical trial of 8 sessions of ACT or PRT with no in-session exposure. The following assessments were completed at pretreatment, posttreatment, and 3-month follow-up by an assessor who was unaware of treatment conditions: Yale-Brown Obsessive Compulsive Scale (Y-BOCS), Beck Depression Inventory-II, Quality of Life Scale, Acceptance and Action Questionnaire, Thought Action Fusion Scale, and Thought Control Questionnaire. Treatment Evaluation Inventory was completed at posttreatment.\n\n\nRESULTS\nACT produced greater changes at posttreatment and follow-up over PRT on OCD severity (Y-BOCS: ACT pretreatment = 24.22, posttreatment = 12.76, follow-up = 11.79; PRT pretreatment = 25.4, posttreatment = 18.67, follow-up = 16.23) and produced greater change on depression among those reporting at least mild depression before treatment. Clinically significant change in OCD severity occurred more in the ACT condition than PRT (clinical response rates: ACT posttreatment = 46%-56%, follow-up = 46%-66%; PRT posttreatment = 13%-18%, follow-up = 16%-18%). Quality of life improved in both conditions but was marginally in favor of ACT at posttreatment. Treatment refusal (2.4% ACT, 7.8% PRT) and dropout (9.8% ACT, 13.2% PRT) were low in both conditions.\n\n\nCONCLUSIONS\nACT is worth exploring as a treatment for OCD.", "title": "" }, { "docid": "ea84c28e02a38caff14683681ea264d7", "text": "This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression. While local anomaly is typically detected as a 3D pattern matching problem, we are more interested in global anomaly that involves multiple normal events interacting in an unusual manner such as car accident. To simultaneously detect local and global anomalies, we formulate the extraction of normal interactions from training video as the problem of efficiently finding the frequent geometric relations of the nearby sparse spatio-temporal interest points. A codebook of interaction templates is then constructed and modeled using Gaussian process regression. A novel inference method for computing the likelihood of an observed interaction is also proposed. As such, our model is robust to slight topological deformations and can handle the noise and data unbalance problems in the training data. Simulations show that our system outperforms the main state-of-the-art methods on this topic and achieves at least 80% detection rates based on three challenging datasets.", "title": "" }, { "docid": "a507756d6663f3ae3e40337a03d00aaa", "text": "This article presents a system developed using graphic programming on LabVIEW using image processing and particle analysis so as to indicate increase in vehicular density on particularly traffic-prone roads and hence alert necessary officials of the same instantly. This system primarily focusses on the vehicular density and sets specific thresholds in accordance with the time of the day to give almost accurate results. It also displays an application of computer vision for traffic flow management and road traffic analysis. 
The method mentioned provides the functionality of alert during times of road clogging and will hence; ensure immediate rectification of the same.", "title": "" }, { "docid": "3b9afbdf3cd66214f0b5511b7b6c9457", "text": "Current traffic light systems use a fixed time delay for different traffic directions and do follow a particular cycle while switching from one signal to another. This creates unwanted congestion during peak hours, loss of man-hours and eventually decline in productivity. In addition to this, the current traffic light systems encourage extortion by corrupt traffic officials as commuters often violate traffic rules because of the insufficient time allocated to their lanes or may want to avoid a long waiting period for their lanes to come up. This research is aimed at tackling the afore-mentioned problems by adopting a density based traffic control approach using Jakpa Junction, one of the busiest junctions in Delta State, Nigeria as a case study. The developed system uses a microcontroller of PIC89C51 microcontroller duly interfaced with sensors. The signal timing changes automatically based on the traffic density at the junction, thereby, avoiding unnecessary waiting time at the junction. The sensors used in this project were infra-red (IR) sensors and photodiodes which were placed in a Line of Sight configuration across the loads to detect the density of the traffic signal. The density of the vehicles is measured in three zones i.e., low, medium and high based on which timings were allotted accordingly. The developed system has proven to be smart and intelligent and capable of curbing incidences of traffic malpractices and inefficiencies that have been the bane of current traffic congestion control systems in emerging cities of the third world.", "title": "" }, { "docid": "115ed03ccee62fafc1606e6f6fdba1ce", "text": "High voltage SF6 circuit breaker must meet the breaking requirement for large short-circuit current, and ensure absence of breakdown after breaking small current. A 126kV high voltage SF6 circuit breaker was used as the research object in this paper. Based on the calculation results of non-equilibrium arc plasma material parameters, the distribution of pressure, temperature and density were calculated during the breaking progress. The electric field distribution was calculated in the course of flow movement, considering the influence of space charge on dielectric voltage. The change rule of the dielectric recovery progress was given based on the stream theory. The dynamic breakdown test circuit was built to measure the values of breakdown voltage under different open distance. The simulation results and experimental data are analyzed and the results show that: 1) Dielectric recovery speed (175kV/ms) is significantly faster than the voltage recovery rate (37.7kV/ms) during the arc extinguishing process. 2) The shorter the small current arcing time, the smaller the breakdown margin, so it is necessary to keep the arcing time longer than 0.5ms to ensure a large breakdown margin. 3) The calculated results are in good agreement with the experimental results. Since the breakdown voltage is less than the TRV in some test points, restrike may occur within 0.5ms after breaking, so arc extinguishment should be avoid in this time range.", "title": "" }, { "docid": "f4438c21802e244d4021ef3390aecf89", "text": "Ship detection has been playing a significant role in the field of remote sensing for a long time but it is still full of challenges. 
The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection and the redundancy of detection region. In order to solve such problems above, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN) which can effectively detect ship in different scenes including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving the problem resulted from the narrow width of the ship. Compared with previous multi-scale detectors such as Feature Pyramid Network (FPN), DFPN builds the high-level semantic feature-maps for all scales by means of dense connections, through which enhances the feature propagation and encourages the feature reuse. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multi-scale ROI Align for the purpose of maintaining the completeness of semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on RDFPN representation has a state-of-the-art performance.", "title": "" }, { "docid": "9798859ddb2d29fa461dab938c5183bb", "text": "The emergence of the extended manufacturing enterprise, a globally dispersed collection of strategically aligned organizations, has brought new attention to how organizations coordinate the flow of information and materials across their w supply chains. This paper explores and develops the concept of enterprise logistics Greis, N.P., Kasarda, J.D., 1997. Ž . x Enterprise logistics in the information age. California Management Review 39 3 , 55–78 as a tool for integrating the logistics activities both within and between the strategically aligned organizations of the extended enterprise. Specifically, this paper examines the fit between an organization’s enterprise logistics integration capabilities and its supply chain structure. Using a configurations approach, we test whether globally dispersed network organizations that adopt enterprise logistics practices are able to achieve higher levels of organizational performance. Results indicate that enterprise logistics is a necessary tool for the coordination of supply chain operations that are geographically dispersed around the world. However, for a pure network structure, a high level of enterprise logistics integration alone does not guarantee improved organizational performance. The paper ends with a discussion of managerial implications and directions for future research. q 2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "c9ae987a050aa063fcd7e6f0ee971b9b", "text": "Smartphones are getting increasingly high-performance with advances in mobile processors and larger main memories to support feature-rich applications. However, the storage subsystem has always been a prohibitive factor that slows down the pace of reaching even higher performance while maintaining good user experience. Despite today's smartphones are equipped with larger-than-ever main memories, they consume more energy and still run out of memory. 
But the slow NAND flash based storage vetoes the possibility of swapping---an important technique to extend main memory---and leaves a system that constantly terminates user applications under memory pressure.\n In this paper, we revisit swapping for smartphones with fast, byte-addressable, non-volatile memory (NVM) technologies. Instead of using flash, we build the swap area with NVM, to allow high performance without sacrificing user experience. Based on NVM's high performance and byte-addressability, we show that a copy-on-write swap-in scheme can achieve even better performance by avoiding unnecessary memory copy operations. To avoid fast worn-out of certain NVMs, we also propose Heap-Wear, a wear leveling algorithm that more evenly distributes writes in NVM. Evaluation results based on the Google Nexus 5 smartphone show that our solution can effectively enhance smartphone performance and give better wear-leveling of NVM.", "title": "" }, { "docid": "58042f8c83e5cc4aa41e136bb4e0dc1f", "text": "In this paper, we propose wire-free integrated sensors that monitor pulse wave velocity (PWV) and respiration, both non-electrical vital signs, by using an all-electrical method. The key techniques that we employ to obtain all-electrical and wire-free measurement are bio-impedance (BI) and analog-modulated body-channel communication (BCC), respectively. For PWV, time difference between ECG signal from the heart and BI signal from the wrist is measured. To remove wires and avoid sampling rate mismatch between ECG and BI sensors, ECG signal is sent to the BI sensor via analog BCC without any sampling. For respiration measurement, BI sensor is located at the abdomen to detect volume change during inhalation and exhalation. A prototype chip fabricated in 0.11 μm CMOS process consists of ECG, BI sensor and BCC transceiver. Measurement results show that heart rate and PWV are both within their normal physiological range. The chip consumes 1.28 mW at 1.2 V supply while occupying 5 mm×2.5 mm of area.", "title": "" }, { "docid": "9e3c4e32862b9b22ba9ec6968584f2ca", "text": "A smartphone’s display is one of its most energy consuming components. Modern smartphones use OLED displays that consume more energy when displaying light colors as op- posed to dark colors. This is problematic as many popular mobile web applications use large light colored backgrounds. To address this problem we developed an approach for auto- matically rewriting web applications so that they generate more energy efficient web pages. Our approach is based on program analysis of the structure of the web application im- plementation. In the evaluation of our approach we show that it can achieve a 40% reduction in display power con- sumption. A user study indicates that the transformed web pages are acceptable to users with over 60% choosing to use the transformed pages for normal usage.", "title": "" }, { "docid": "c3f1a534afe9f5c48aac88812a51ab09", "text": "We propose a novel method MultiModal Pseudo Relevance Feedback (MMPRF) for event search in video, which requires no search examples from the user. Pseudo Relevance Feedback has shown great potential in retrieval tasks, but previous works are limited to unimodal tasks with only a single ranked list. To tackle the event search task which is inherently multimodal, our proposed MMPRF takes advantage of multiple modalities and multiple ranked lists to enhance event search performance in a principled way. 
The approach is unique in that it leverages not only semantic features, but also non-semantic low-level features for event search in the absence of training data. Evaluated on the TRECVID MEDTest dataset, the approach improves the baseline by up to 158% in terms of the mean average precision. It also significantly contributes to CMU Team's final submission in TRECVID-13 Multimedia Event Detection.", "title": "" }, { "docid": "ed9995e44ec14e26c0e8e8ee09a10d7c", "text": "Information systems play a significant role in assisting and improving a university's operational work performance. The quality of IT service is also needed to provide an information system that matches with a university's needs. As a result, evaluations need to be conducted towards IT services in fulfilling the needs and providing satisfaction for information systems users. The purpose of this paper was to conduct a synthesis of the service work performance provided by the IT Division towards information systems users by using systematic literature review. The methodology used in this research was a literature study related with a COBIT and ITIL framework, as well as finding the interrelatedness between service managements toward an increase of IT work performance.", "title": "" }, { "docid": "d02af961d8780a06ae0162647603f8bb", "text": "We contribute an empirically derived noise model for the Kinect sensor. We systematically measure both lateral and axial noise distributions, as a function of both distance and angle of the Kinect to an observed surface. The derived noise model can be used to filter Kinect depth maps for a variety of applications. Our second contribution applies our derived noise model to the KinectFusion system to extend filtering, volumetric fusion, and pose estimation within the pipeline. Qualitative results show our method allows reconstruction of finer details and the ability to reconstruct smaller objects and thinner surfaces. Quantitative results also show our method improves pose estimation accuracy.", "title": "" }, { "docid": "ba0726778e194159d916c70f5f4cedc9", "text": "We present a system for multimedia event detection. The developed system characterizes complex multimedia events based on a large array of multimodal features, and classifies unseen videos by effectively fusing diverse responses. We present three major technical innovations. First, we explore novel visual and audio features across multiple semantic granularities, including building, often in an unsupervised manner, mid-level and high-level features upon low-level features to enable semantic understanding. Second, we show a novel Latent SVM model which learns and localizes discriminative high-level concepts in cluttered video sequences. In addition to improving detection accuracy beyond existing approaches, it enables a unique summary for every retrieval by its use of high-level concepts and temporal evidence localization. The resulting summary provides some transparency into why the system classified the video as it did. Finally, we present novel fusion learning algorithms and our methodology to improve fusion learning under limited training data condition. Thorough evaluation on a large TRECVID MED 2011 dataset showcases the benefits of the presented system.", "title": "" }, { "docid": "75bca61c2ca38e73ba43cca6244c357e", "text": "This paper presents our latest investigation on Densely Connected Convolutional Networks (DenseNets) for acoustic modelling (AM) in automatic speech recognition. 
DenseNets are very deep, compact convolutional neural networks, which have demonstrated incredible improvements over the state-of-the-art results on several data sets in computer vision. Our experimental results show that DenseNet can be used for AM, significantly outperforming other neural-based models such as DNNs, CNNs, VGGs. Furthermore, results on Wall Street Journal revealed that with only a half of the training data DenseNet was able to outperform other models trained with the full data set by a large margin.", "title": "" } ]
scidocsrr
0058e1037f755c49ed8a6f53fd3b86d9
Social power and information technology implementation: a contentious framing lens
[ { "docid": "3d0dddc16ae56d6952dd1026476fcbcc", "text": "We introduce a collective action model of institutional innovation. This model, based on converging perspectives from the technology innovation management and social movements literature, views institutional change as a dialectical process in which partisan actors espousing conflicting views confront each other and engage in political behaviors to create and change institutions. The model represents an important complement to existing models of institutional change. We discuss how these models together account for various stages and cycles of institutional change.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "784dc9c78e6552e4df8bfd9a7796d847", "text": "For image generation, deep neural networks are trained to extract high-level features on natural images and to reconstruct the images from the features. However it is difficult to learn to generate images containing enormous contents. To overcome this difficulty, a network with an attention mechanism has been proposed. It is trained to attend to parts of the image and to generate images step by step. This enables the network to deal with the details of a part of the image and the rough structure of the entire image. The attention mechanism is implemented by recurrent neural networks. Additionally, the Generative Adversarial Networks (GANs) approach has been proposed to generate more realistic images. In this study, we present image generation where leverages effectiveness of attention mechanism and the GANs approach. We show our method enables the iterative construction of images and more realistic image generation than standard GANs and the attention mechanism of DRAW.", "title": "" }, { "docid": "80fa6e0debf1ee93a9a819004d56b107", "text": "In recent years, a lot of mobile payment service providers entered the market and often left it without any success. They have often disregarded the high complexity of the ecosystem and of the particularities of mobile communication techniques. The contribution of this paper is to understand the determinants of customer acceptance of mobile payment especially the influence of payment scenarios. For that purpose, the Technology Acceptance Model (TAM) was extended by new constructs measuring the m-payment particularities like expressiveness and the applicability in different payment scenarios.", "title": "" }, { "docid": "d70ea405a182c4de3f50858599f84ad8", "text": "Oral lichen planus (OLP) has a prevalence of approximately 1%. The etiopathogenesis is poorly understood. The annual malignant transformation is less than 0.5%. There are no effective means to either predict or to prevent such event. Oral lesions may occur that to some extent look like lichen planus but lacking the characteristic features of OLP, or that are indistinguishable from OLP clinically but having a distinct cause, e.g. amalgam restoration associated. Such lesions are referred to as oral lichenoid lesions (OLLs). The management of OLP and the various OLLs may be different. Therefore, accurate diagnosis should be aimed at.", "title": "" }, { "docid": "2f8a74054d456d1136f0a36303b722bc", "text": "The swarm intelligence paradigm has proven to have very interesting properties such as robustness, flexibility and ability to solve complex problems exploiting parallelism and self-organization. Several robotics implementations of this paradigm confirm that these properties can be exploited for the control of a population of physically independent mobile robots. The work presented here introduces a new robotic concept called swarm-bot in which the collective interaction exploited by the swarm intelligence mechanism goes beyond the control layer and is extended to the physical level. This implies the addition of new mechanical functionalities on the single robot, together with new electronics and software to manage it. These new functionalities, even if not directly related to mobility and navigation, allow to address complex mobile robotics problems, such as extreme all-terrain exploration. 
The work shows also how this new concept is investigated using a simulation tool (swarmbot3d) specifically developed for quickly designing and evaluating new control algorithms. Experimental work shows how the simulated detailed representation of one s-bot has been calibrated to match the behaviour of the real robot.", "title": "" }, { "docid": "e7aae88d2b70cd780d431b847524ea63", "text": "The telecommunications industry, like many others, is experiencing a watershed. No longer can customers pursue technological advances just for technology's sake. Technology must support real, measurable, and innovative goals of the enterprise. The technologies and terms that comprise every major provider's portfolio are starting to look and sound alike. New product offerings appear almost identical to existing products in the same market. The terms VPN, MPLS, convergence, the ubiquitous \" IP, \" service level agreements (SLA), single points of contact, managed network services, and global footprints are important in the telecommunications market, but we have heard them all before. The competitive differentiation that service providers desperately seek will not occur on this homogenous slate of technology and service offerings. Only when service providers truly understand what is happening from the customer's perspective will real competitive differentiation take place. Providers must realize that they do not drive the networking and telecom environment; the customers' strategic and tactical objectives drive it. If service providers wish to position at higher levels in the corporation, they must change the way they communicate. Such communication should not only show an understanding of the enterprise applications themselves but also an understanding of how the applications relate to the service providers' product set. This paper will outline three (of the many) enterprise applications and business drivers service providers can use to differentiate themselves. We will examine the concepts of data warehousing and data mining for the purpose of effective enterprise resource planning (ERP), customer relationship management (CRM), and supply chain management (SCM). We will define the major aspects of each, examine the drivers and impacts of each, and consider how each relates to the service providers' product sets.", "title": "" }, { "docid": "8d6a6dd157de2a3a842977e62687a3db", "text": "Recent advantages in the anatomical understanding of the face have turned the focus toward the subcutaneous and deep facial fat compartments. During facial aging, these fat-filled compartments undergo substantial changes along with other structures in the face. Soft tissue filler and fat grafting are valid methods to fight the signs of facial aging, but little is known about their precise effect on the facial fat. This narrative review summarizes the current knowledge about the facial fat compartments in terms of anatomical location, histologic appearance, immune-histochemical characteristics, cellular interactions, and therapeutic options. Three different types of facial adipose tissue can be identified, which are located either superficially (dermal white adipose tissue) or deep (subcutaneous white adipose tissue): fibrous (perioral locations), structural (major parts of the midface), and deposit (buccal fat pad and deep temporal fat pad). These various fat types differ in the size of the adipocytes and the collagenous composition of their extracellular matrix and thus in their mechanical properties. 
Minimal invasive (e.g., soft tissue fillers or fat grafting) and surgical interventions aiming to restore the youthful face have to account for the different fat properties in various facial areas. However, little is known about the macro- and microscopic characteristics of the facial fat tissue in different compartments and future studies are needed to reveal new insights to better understand the process of aging and how to fight its signs best.", "title": "" }, { "docid": "ae8f5c568b2fdbb2dbef39ac277ddb24", "text": "Knowledge graph construction consists of two tasks: extracting information from external resources (knowledge population) and inferring missing information through a statistical analysis on the extracted information (knowledge completion). In many cases, insufficient external resources in the knowledge population hinder the subsequent statistical inference. The gap between these two processes can be reduced by an incremental population approach. We propose a new probabilistic knowledge graph factorisation method that benefits from the path structure of existing knowledge (e.g. syllogism) and enables a common modelling approach to be used for both incremental population and knowledge completion tasks. More specifically, the probabilistic formulation allows us to develop an incremental population algorithm that trades off exploitation-exploration. Experiments on three benchmark datasets show that the balanced exploitation-exploration helps the incremental population, and the additional path structure helps to predict missing information in knowledge completion.", "title": "" }, { "docid": "1b2d34a38f026b5e24d39cb68c8235ee", "text": "This book offers a comprehensive introduction to workflow management, the management of business processes with information technology. By defining, analyzing, and redesigning an organization’s resources and operations, workflow management systems ensure that the right information reaches the right person or computer application at the right time. The book provides a basic overview of workflow terminology and organization, as well as detailed coverage of workflow modeling with Petri nets. Because Petri nets make definitions easier to understand for nonexperts, they facilitate communication between designers and users. The book includes a chapter of case studies, review exercises, and a glossary.", "title": "" }, { "docid": "19c8893f9e27e48c9d31b759735936ec", "text": "Advanced driver assistance systems (ADAS) can be significantly improved with effective driver action prediction (DAP). Predicting driver actions early and accurately can help mitigate the effects of potentially unsafe driving behaviors and avoid possible accidents. In this paper, we formulate driver action prediction as a timeseries anomaly prediction problem. While the anomaly (driver actions of interest) detection might be trivial in this context, finding patterns that consistently precede an anomaly requires searching for or extracting features across multi-modal sensory inputs. We present such a driver action prediction system, including a real-time data acquisition, processing and learning framework for predicting future or impending driver action. The proposed system incorporates camera-based knowledge of the driving environment and the driver themselves, in addition to traditional vehicle dynamics. 
It then uses a deep bidirectional recurrent neural network (DBRNN) to learn the correlation between sensory inputs and impending driver behavior achieving accurate and high horizon action prediction. The proposed system performs better than other existing systems on driver action prediction tasks and can accurately predict key driver actions including acceleration, braking, lane change and turning at durations of 5sec before the action is executed by the driver. Keywords— timeseries modeling, driving assistant system, driver action prediction, driver intent estimation, deep recurrent neural network", "title": "" }, { "docid": "aa234355d0b0493e1d8c7a04e7020781", "text": "Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.", "title": "" }, { "docid": "6b6b985af1ac016745b710e848e53ad6", "text": "Much artificial intelligence research is based on the construction of large impressive-looking programs, the theoretical content of which may not always be clearly stated. This is unproductive from the point of view of building a stable base for further research. We illustrate this problem by referring to Lenat's AM program, in which the techniques employed are somewhat obscure in spite of the impressive performance.", "title": "" }, { "docid": "c8ca57db545f2d1f70f3640651bb3e79", "text": "sprightly style and is interesting from cover to cover. The comments, critiques, and summaries that accompany the chapters are very helpful in crystalizing the ideas and answering questions that may arise, particularly to the self-learner. The transparency in the presentation of the material in the book equips the reader to proceed quickly to a wealth of problems included at the end of each chapter. These problems ranging from elementary to research-level are very valuable in that a solid working knowledge of the invariant imbedding techniques is acquired as well as good insight in attacking problems in various applied areas. Furthermore, a useful selection of references is given at the end of each chapter. 
This book may not appeal to those mathematicians who are interested primarily in the sophistication of mathematical theory, because the authors have deliberately avoided all pseudo-sophistication in attaining transparency of exposition. Precisely for the same reason the majority of the intended readers who are applications-oriented and are eager to use the techniques quickly in their own fields will welcome and appreciate the efforts put into writing this book. From a purely mathematical point of view, some of the invariant imbedding results may be considered to be generalizations of the classical theory of first-order partial differential equations, and a part of the analysis of invariant imbedding is still at a somewhat heuristic stage despite successes in many computational applications. However, those who are concerned with mathematical rigor will find opportunities to explore the foundations of the invariant imbedding method. In conclusion, let me quote the following: \"What is the best method to obtain the solution to a problem? The answer is, any way that works.\" (Richard P. Feynman, Engineering and Science, March 1965, Vol. XXVIII, no. 6, p. 9.) In this well-written book, Bellman and Wing have indeed accomplished the task of introducing the simplicity of the invariant imbedding method to tackle various problems of interest to engineers, physicists, applied mathematicians, and numerical analysts.", "title": "" }, { "docid": "6f942f8ead4684f4943d1c82ea140b9a", "text": "This paper considers the problem of approximate nearest neighbor search in the compressed domain. We introduce polysemous codes, which offer both the distance estimation quality of product quantization and the efficient comparison of binary codes with Hamming distance. Their design is inspired by algorithms introduced in the 90’s to construct channel-optimized vector quantizers. At search time, this dual interpretation accelerates the search. Most of the indexed vectors are filtered out with Hamming distance, letting only a fraction of the vectors be ranked with an asymmetric distance estimator. The method is complementary with a coarse partitioning of the feature space such as the inverted multi-index. This is shown by our experiments performed on several public benchmarks such as the BIGANN dataset comprising one billion vectors, for which we report state-of-the-art results for query times below 0.3 millisecond per core. Last but not least, our approach allows the approximate computation of the k-NN graph associated with the Yahoo Flickr Creative Commons 100M, described by CNN image descriptors, in less than 8 hours on a single machine.", "title": "" }, { "docid": "c78c9181f811813481b7b2c7b5476ffb", "text": "Quantitative modeling of human brain activity based on language representations has been actively studied in systems neuroscience. However, previous studies examined word-level representation, and little is known about whether we could recover structured sentences from brain activity. This study attempts to generate natural language descriptions of semantic contents from human brain activity evoked by visual stimuli. To effectively use a small amount of available brain activity data, our proposed method employs a pre-trained image-captioning network model using a deep learning framework. To apply brain activity to the image-captioning network, we train regression models that learn the relationship between brain activity and deep-layer image features.
The results demonstrate that the proposed model can decode brain activity and generate descriptions using natural language sentences. We also conducted several experiments with data from different subsets of brain regions known to process visual stimuli. The results suggest that semantic information for sentence generation is widespread across the entire cortex.", "title": "" }, { "docid": "6f4171569bb00bc70291fd2cd7b704e8", "text": "This paper aims at understanding the role of multi-scale information in the estimation of depth from monocular images. More precisely, the paper investigates four different deep CNN architectures, designed to explicitly make use of multi-scale features along the network, and compares them to a state-of-the-art single-scale approach. The paper also shows that involving multi-scale features in depth estimation not only improves the performance in terms of accuracy, but also gives qualitatively better depth maps. Experiments are done on the widely used NYU Depth dataset, on which the proposed method achieves state-of-the-art performance.", "title": "" }, { "docid": "831ea386dcb15a6967196b90cf3b6516", "text": "Advanced metering infrastructure (AMI) is an imperative component of the smart grid, as it is responsible for collecting, measuring, and analyzing energy usage data, and transmitting these data to the data concentrator and then to a central system on the utility side. Therefore, the security of AMI is one of the most demanding issues in the smart grid implementation. In this paper, we propose an intrusion detection system (IDS) architecture for AMI which will act as a complement to other security measures. This IDS architecture consists of three local IDSs placed in smart meters, data concentrators, and the central system (AMI headend). For detecting anomalies, we use a data stream mining approach on the public KDD CUP 1999 data set to analyze the requirements of the three components in AMI. Our results and analysis show that the stream data mining technique has promising potential for solving security issues in AMI.", "title": "" }, { "docid": "33f7b3df9b4cb973f25ed1bef1dbc955", "text": "We propose a mixture-of-experts approach for unsupervised domain adaptation from multiple sources. The key idea is to explicitly capture the relationship between a target example and different source domains. This relationship, expressed by a point-to-set metric, determines how to combine predictors trained on various domains. The metric is learned in an unsupervised fashion using metatraining. Experimental results on sentiment analysis and part-of-speech tagging demonstrate that our approach consistently outperforms multiple baselines and can robustly handle negative transfer.", "title": "" }, { "docid": "2a9b010b062ac7f0873cab93b6096d0b", "text": "Hardly any subjects enjoy greater - public or private - interest than the art of flirtation and seduction. However, interpersonal approach behavior not only paves the way for sexual interaction and reproduction, but it simultaneously integrates non-sexual psychobiological and cultural standards regarding consensus and social norms. In the present paper, we use script theory, a concept that extends across psychological and cultural science, to assess behavioral options during interpersonal approaches. Specifically, we argue that approaches follow scripted event sequences that entail ambivalence as an essential communicative element.
On the one hand, ambivalence may facilitate interpersonal approaches by maintaining and provoking situational uncertainty, so that the outcome of an action - even after several approaches and dates - remains ambiguous. On the other hand, ambivalence may increase the risk for sexual aggression or abuse, depending on the individual's abilities, the circumstances, and the intentions of the interacting partners. Recognizing latent sequences of sexually aggressive behavior, in terms of their rigid structure and behavioral options, may thus enable individuals to use resources efficiently, avoid danger, and extricate themselves from assault situations. We conclude that interdisciplinary script knowledge about ambivalence as a core component of the seduction script may be helpful for counteracting subtly aggressive intentions and preventing sexual abuse. We discuss this with regard to the nature-nurture debate as well as phylogenetic and ontogenetic aspects of interpersonal approach behavior and its medial implementation.", "title": "" }, { "docid": "86bd51407b0774d07e9f8cdea04c8e1d", "text": "A new method for learning variational autoencoders (VAEs) is developed, based on Stein variational gradient descent. A key advantage of this approach is that one need not make parametric assumptions about the form of the encoder distribution. Performance is further enhanced by integrating the proposed encoder with importance sampling. Excellent performance is demonstrated across multiple unsupervised and semi-supervised problems, including semi-supervised analysis of the ImageNet data, demonstrating the scalability of the model to large datasets.", "title": "" } ]
scidocsrr
eae93a0b0861ece594e7698142fbd122
Dex: Incremental Learning for Complex Environments in Deep Reinforcement Learning
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" } ]
[ { "docid": "9c12404173c8395d39168d16b0cb66e6", "text": "Ranking is a key problem in many information retrieval (IR) applications, such as document retrieval and collaborative filtering. In this paper, we address the issue of learning to rank in document retrieval. Learning-based methods, such as RankNet, RankSVM, and RankBoost, try to create ranking functions automatically by using some training data. Recently, several learning to rank methods have been proposed to directly optimize the performance of IR applications in terms of various evaluation measures. They undoubtedly provide statistically significant improvements over conventional methods; however, from the viewpoint of decision-making, most of them do not minimize the Bayes risk of the IR system. In an attempt to fill this research gap, we propose a novel framework that directly optimizes the Bayes risk related to the ranking accuracy in terms of the IR evaluation measures. The results of experiments on the LETOR collections demonstrate that the framework outperforms several existing methods in most cases.", "title": "" }, { "docid": "7834cad6190a019c3b0086a3f0231182", "text": "In modern train control systems, a moving train retrieves its location information through passive transponders called balises, which are placed on the sleepers of the track at regular intervals. When the train-borne antenna energizes them using tele-powering signals, balises backscatter preprogrammed telegrams, which carry information about the train's current location. Since the telegrams are static in the existing implementations, the uplink signals from the balises could be recorded by an adversary and then replayed at a different location of the track, leading to what is well-known as the replay attack. Such an attack, while the legitimate balise is still functional, introduces ambiguity to the train about its location, can impact the physical operations of the trains. For balise-to-train communication, we propose a new communication framework referred to as cryptographic random fountains (CRF), where each balise, instead of transmitting telegrams with fixed information, transmits telegrams containing random signals. A salient feature of CRF is the use of challenge-response based interaction between the train and the balise for communication integrity. We present a thorough security analysis of CRF to showcase its ability to mitigate sophisticated replay attacks. Finally, we also discuss the implementation aspects of our framework.", "title": "" }, { "docid": "d8a194a88ccf20b8160b75d930969c85", "text": "We describe the design and hardware implementation of our walking and manipulation controllers that are based on a cascade of online optimizations. A virtual force acting at the robot's center of mass (CoM) is estimated and used to compensated for modeling errors of the CoM and unplanned external forces. The proposed controllers have been implemented on the Atlas robot, a full size humanoid robot built by Boston Dynamics, and used in the DARPA Robotics Challenge Finals, which consisted of a wide variety of locomotion and manipulation tasks.", "title": "" }, { "docid": "de8567d7af1061b8fe6220a064eb2b69", "text": "This paper presents a five stage current starved Voltage Controlled Oscillator (CMOS VCO) for low power Phase Lock Loop (PLL). The implemented design used a standard 0.18μm CMOS Technology with simulation CAD software mentor graphics tool and uses two models of P-channel and N-channel Mosfets Model I and II. 
Models I and II of the P-channel MOSFET have two parameters, width and length, of 10, 0.5 and 8, 0.2 respectively, while Models I and II of the N-channel MOSFET have similar width and length parameters of 10, 0.5 and 8, 0.2 respectively. The experimental results presented suggest that the design exhibits a VCO frequency ranging from 21 MHz to 315.34 MHz at low power. The designed circuit is simulated using 180 nm technology, and the results show that the voltage drawn is around 5 V supplied at VDD, and the resulting current and voltage give an approximate power consumption of 105.3 mW. The proposed design is suitable for a PLL as a frequency multiplier based on the features presented. Keywords— VCO, CMOS Low Power, PLL.", "title": "" }, { "docid": "2b3f7ee7e28089c9d66110f571f8b0a3", "text": "Autopilot systems are typically composed of an “inner loop” providing stability and control, whereas an “outer loop” is responsible for mission-level objectives, such as way-point navigation. Autopilot systems for unmanned aerial vehicles are predominantly implemented using Proportional-Integral-Derivative (PID) control systems, which have demonstrated exceptional performance in stable environments. However, more sophisticated control is required to operate in unpredictable and harsh environments. Intelligent flight control systems are an active area of research addressing limitations of PID control, most recently through the use of reinforcement learning (RL), which has had success in other applications, such as robotics. Yet previous work has focused primarily on using RL at the mission-level controller. In this work, we investigate the performance and accuracy of the inner control loop providing attitude control when using intelligent flight control systems trained with state-of-the-art RL algorithms—Deep Deterministic Policy Gradient, Trust Region Policy Optimization, and Proximal Policy Optimization. To investigate these unknowns, we first developed an open source high-fidelity simulation environment to train a flight controller for attitude control of a quadrotor through RL. We then used our environment to compare their performance to that of a PID controller to identify if using RL is appropriate in high-precision, time-critical flight control.", "title": "" }, { "docid": "69ae64969a3bfe518cd003d97e0ee009", "text": "In this research we set out to discover why and how people seek anonymity in their online interactions. Our goal is to inform policy and the design of future Internet architecture and applications. We interviewed 44 people from America, Asia, Europe, and Africa who had sought anonymity and asked them about their experiences. A key finding of our research is the very large variation in interviewees' past experiences and life situations leading them to seek anonymity, and how they tried to achieve it. Our results suggest implications for the design of online communities, challenges for policy, and ways to improve anonymity tools and educate users about the different routes and threats to anonymity on the Internet.", "title": "" }, { "docid": "e177c04d8eb729046d368965dbcedd4c", "text": "This study investigated biased message processing of political satire in The Colbert Report and the influence of political ideology on perceptions of Stephen Colbert. Results indicate that political ideology influences biased processing of ambiguous political messages and source in late-night comedy.
Using data from an experiment (N = 332), we found that individual-level political ideology significantly predicted perceptions of Colbert’s political ideology. Additionally, there was no significant difference between the groups in thinking Colbert was funny, but conservatives were more likely to report that Colbert only pretends to be joking and genuinely meant what he said while liberals were more likely to report that Colbert used satire and was not serious when offering political statements. Conservatism also significantly predicted perceptions that Colbert disliked liberalism. Finally, a post hoc analysis revealed that perceptions of Colbert’s political opinions fully mediated the relationship between political ideology and individual-level opinion.", "title": "" }, { "docid": "4f52077553ebd94ed6ce9ff2120dfe9d", "text": "A new type of deep neural networks (DNNs) is presented in this paper. Traditional DNNs use the multinomial logistic regression (softmax activation) at the top layer for classification. The new DNN instead uses a support vector machine (SVM) at the top layer. Two training algorithms are proposed at the frame and sequence-level to learn parameters of SVM and DNN in the maximum-margin criteria. In the frame-level training, the new model is shown to be related to the multiclass SVM with DNN features; In the sequence-level training, it is related to the structured SVM with DNN features and HMM state transition features. Its decoding process is similar to the DNN-HMM hybrid system but with frame-level posterior probabilities replaced by scores from the SVM. We term the new model deep neural support vector machine (DNSVM). We have verified its effectiveness on the TIMIT task for continuous speech recognition.", "title": "" }, { "docid": "6006d2a032b60c93e525a8a28828cc7e", "text": "Recent advances in genome engineering indicate that innovative crops developed by targeted genome modification (TGM) using site-specific nucleases (SSNs) have the potential to avoid the regulatory issues raised by genetically modified organisms. These powerful SSNs tools, comprising zinc-finger nucleases, transcription activator-like effector nucleases, and clustered regulatory interspaced short palindromic repeats/CRISPR-associated systems, enable precise genome engineering by introducing DNA double-strand breaks that subsequently trigger DNA repair pathways involving either non-homologous end-joining or homologous recombination. Here, we review developments in genome-editing tools, summarize their applications in crop organisms, and discuss future prospects. We also highlight the ability of these tools to create non-transgenic TGM plants for next-generation crop breeding.", "title": "" }, { "docid": "9b017cce620990f8aadb4fef9dcb015a", "text": "A recent line of work has uncovered a new form of data poisoning: so-called backdoor attacks. These attacks are particularly dangerous because they do not affect a network’s behavior on typical, benign data. Rather, the network only deviates from its expected output when triggered by a perturbation planted by an adversary. In this paper, we identify a new property of all known backdoor attacks, which we call spectral signatures. This property allows us to utilize tools from robust statistics to thwart the attacks. We demonstrate the efficacy of these signatures in detecting and removing poisoned examples on real image sets and state of the art neural network architectures. 
We believe that understanding spectral signatures is a crucial first step towards designing ML systems secure against such backdoor attacks.", "title": "" }, { "docid": "b7729008700bd7623db8a967826d6e23", "text": "This paper describes the modeling of jitter in clock-and-data recovery (CDR) systems using an event-driven model that accurately includes the effects of power-supply noise, the finite bandwidth (aperture window) in the phase detector's front-end sampler, and intersymbol interference in the system's channel. These continuous-time jitter sources are captured in the model through their discrete-time influence on sample-based phase detectors. Modeling parameters for these disturbances are directly extracted from the circuit implementation. The event-driven model, implemented in Simulink, has a simulation accuracy within 12% of an Hspice simulation, but with a simulation speed that is 1800 times higher.", "title": "" }, { "docid": "f2d1f05292ddb0df8fa92fe1992852ab", "text": "In this paper, we study the design of omnidirectional mobile robots with Active-Caster RObotic drive with BAll Transmission (ACROBAT). The ACROBAT system has been developed by the authors' group and realizes mechanical coordination of wheel and steering motions for creating caster behaviors without computer calculations. A motion in a specific direction relative to the robot body depends fully on the motion of a specific motor. This feature allows a robot designer to build an omnidirectional mobile robot propelled by active-casters with no redundant actuation and a simple control. A controller of the robot becomes as simple as that for omni-wheeled robotic bases. Namely, the 3DOF of the omnidirectional robot are controlled by three motors using simple and constant kinematics. ACROBAT includes a unique dual-ball transmission to transmit traction power to rotate and orient a drive wheel while distributing velocity components to the wheel and steering axes in an appropriate ratio. Therefore, a sensor for measuring wheel orientation and calculations for velocity distributions are totally removed from a conventional control system. To build an omnidirectional vehicle with ACROBAT, the significant feature is that multiple drive shafts can be driven by a common motor, which realizes non-redundant actuation of the robotic platform. A kinematic model of the proposed robot with ACROBAT is analyzed and a mechanical condition for realizing a non-redundant actuation is derived. Based on the kinematic model and the mechanical condition, computer simulations of the mechanism are performed. A prototype two-wheeled robot with two ACROBATs is designed and built to verify the availability of the proposed system. In the experiments, the prototype robot shows successful omnidirectional motions with a simple and constant kinematics-based control.", "title": "" }, { "docid": "6c75f7466323ed15f3b0c625a1ee90e3", "text": "During the last years, significant progress was achieved in unraveling molecular characteristics of the thylakoid membrane of different diatoms. With the present review it is intended to summarize the current knowledge about the structural and functional changes within the thylakoid membrane of diatoms acclimated to different light conditions. This aspect is addressed on the level of the organization and regulation of light-harvesting proteins, the dissipation of excessively absorbed light energy by the process of non-photochemical quenching, and the lipid composition of diatom thylakoid membranes.
Finally, a working hypothesis of the domain formation of the diatom thylakoid membrane is presented to highlight the most prominent differences of heterokontic thylakoids in comparison to vascular plants and green algae during the acclimation to low and high light conditions.", "title": "" }, { "docid": "5793cf03753f498a649c417e410c325e", "text": "The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes.", "title": "" }, { "docid": "326246fd723fba699a9ae2219082b522", "text": "Metadata in the Haystack environment is expressed according to the Resource Description Framework (RDF) (RDF, 1998). In essence, RDF is a format for describing semantic networks or directed graphs with labeled edges. Nodes and edges are named with uniform resource identifiers (URIs), making them globally unique and thus useful in a distributed environment. Node URIs are used to represent objects, such as web pages, people, agents, and documents. A directed edge connecting two nodes expresses a relationship, given by the URI of the edge.", "title": "" }, { "docid": "4d6e9bc0a8c55e65d070d1776e781173", "text": "As electronic device feature sizes scale down, the power consumed due to onchip communications as compared to computations will increase dramatically; likewise, the available bandwidth per computational operation will continue to decrease. Integrated photonics can offer savings in power and a potential increase in bandwidth for onchip networks. Classical diffraction-limited photonics currently utilized in photonic integrated circuits (PIC) is characterized by bulky and inefficient devices compared to their electronic counterparts due to weak light matter interactions (LMI). Performance-critical for the PIC are electro-optic modulators (EOM), whose performance depends inherently on enhancing LMIs. Current EOMs based on diffraction-limited optical modes often deploy ring resonators and are consequently bulky, photon-lifetime modulation limited, and power inefficient due to large electrical...", "title": "" }, { "docid": "8b0719746cb7aab54bae2464a11e19c2", "text": "Information and intelligence are two vital columns on which the development of humankind rises, and knowledge has a significant impact on the functioning of society. Student assessment is a crucial part of teaching and is done through the process of examinations, and the preparation of exam question papers has consistently been a matter of interest.
Present-day technologies assist teachers in storing questions in computer databases, but the problem that emerges is how these technologies can also assist teachers in automatically creating varied sets of questions from time to time, without worrying about replication and duplication from previous exams, while the question bank keeps growing. A non-automatic approach to designing an exam paper cannot serve this need, so in this paper we introduce an automated approach that makes the process of designing exam papers more organized and productive, and that also aids in developing a database of questions which can be further classified for composing exam question papers. Currently, there is no systematic procedure to ensure the quality of exam question papers. Hence there is a requirement for a system that will automatically create a question paper from a teacher-entered description within few", "title": "" }, { "docid": "32fd519ce3e4995ee994ed168429e016", "text": "Most state-of-the-art robotic cars’ perception systems are quite different from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while the machine perception of traffic environments needs to fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for its autonomous driving, while an experienced human driver works well with dynamic traffic environments, in which machine perception could easily produce noisy perception results. In this paper, we propose a vision-centered multi-sensor fusing framework for a traffic environment perception approach to autonomous driving, which fuses camera, LIDAR, and GIS information consistently via both geometrical and semantic constraints for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated with the framework and address multiple levels of machine vision techniques, from collecting training data, efficiently processing sensor data, and extracting low-level features, to higher-level object and environment mapping. The proposed framework has been tested extensively in actual urban scenes with our self-developed robotic cars for eight years. The empirical results validate its robustness and efficiency.", "title": "" }, { "docid": "d9c4bdd95507ef497db65fc80d3508c5", "text": "3D content creation is regarded as one of the most fundamental tasks of computer graphics, and many 3D modeling algorithms from 2D images or curves have been developed over the past several decades. Designers are allowed to align some conceptual images or sketch some suggestive curves, from front, side, and top views, and then use them as references in constructing a 3D model automatically or manually. However, to the best of our knowledge, no studies have investigated 3D human body reconstruction in a similar manner. In this paper, we propose a deep learning based reconstruction of 3D human body shape from 2D orthographic views. A novel CNN-based regression network, with two branches corresponding to frontal and lateral views respectively, is designed for estimating 3D human body shape from 2D mask images.
We train our networks separately to decouple the feature descriptors which encode the body parameters from different views, and fuse them to estimate an accurate human body shape. In addition, to overcome the shortage of training data required for this purpose, we propose some significant data augmentation schemes for 3D human body shapes, which can be used to promote further research on this topic. Extensive experimental results demonstrate that visually realistic and accurate reconstructions can be achieved effectively using our algorithm. Requiring only binary mask images, our method can help users create their own digital avatars quickly, and also make it easy to create digital human bodies for 3D games, virtual reality, and online fashion shopping.", "title": "" } ]
scidocsrr
325c72af2a7c7e8fbd06f755043bd5ea
Characterizing Cloud Federation in IoT
[ { "docid": "74497fc5d50ad6047d428714bfbba6b8", "text": "Newer models for interacting with wireless sensors such as Internet of Things and Sensor Cloud aim to overcome restricted resources and efficiency. The Missouri S&T (science and technology) sensor cloud enables different networks, spread in a huge geographical area, to connect together and be employed simultaneously by multiple users on demand. Virtual sensors, which are at the core of this sensor cloud architecture, assist in creating a multiuser environment on top of resource-constrained physical wireless sensors and can help in supporting multiple applications.", "title": "" } ]
[ { "docid": "316aa66508daedc1b729283d6212bdb0", "text": "The purpose of this study is to examine the physiological effects of Shinrin-yoku (taking in the atmosphere of the forest). The subjects were 12 male students (22.8+/-1.4 yr). On the first day of the experiments, one group of 6 subjects was sent to a forest area, and the other group of 6 subjects was sent to a city area. On the second day, each group was sent to the opposite area for a cross check. In the forenoon, the subjects were asked to walk around their given area for 20 minutes. In the afternoon, they were asked to sit on chairs and watch the landscapes of their given area for 20 minutes. Cerebral activity in the prefrontal area and salivary cortisol were measured as physiological indices in the morning at the place of accommodation, before and after walking in the forest or city areas during the forenoon, and before and after watching the landscapes in the afternoon in the forest and city areas, and in the evening at the place of accommodation. The results indicated that cerebral activity in the prefrontal area of the forest area group was significantly lower than that of the group in the city area after walking; the concentration of salivary cortisol in the forest area group was significantly lower than that of the group in the city area before and after watching each landscape. The results of the physiological measurements show that Shinrin-yoku can effectively relax both people's body and spirit.", "title": "" }, { "docid": "7fbd42453885a86ff0d4cdd68fad2b7e", "text": "Artificial Neural Networks can be used to predict future returns of stocks in order to take financial decisions . Should one build a separate network for each stock or share the same network for all the stocks? In this paper we also explore other alternatives, in which some layers are shared and others are not shared. When the prediction of future returns for different stocks are viewed as different tasks, sharing some parameters across stocks is a form of multi-task learning. In a series of experiments with Canadian stocks, we obtain yearly returns that are more than 14% above various benchmarks.", "title": "" }, { "docid": "0b643a08e19a8c36e510623179a12ae3", "text": "—In this work we propose a new deep learning tool – deep dictionary learning. Multi-level dictionaries are learnt in a greedy fashion – one layer at a time. This requires solving a simple (shallow) dictionary learning problem; the solution to this is well known. We apply the proposed technique on some benchmark deep learning datasets. We compare our results with other deep learning tools like stacked autoencoder and deep belief network; and state-of-the-art supervised dictionary learning tools like discriminative K-SVD and label consistent K-SVD. Our method yields better results than all.", "title": "" }, { "docid": "ddb380c2c7b0377a50a919784994e227", "text": "The colors of deep-sea species are generally assumed to be cryptic, but it is not known how cryptic they are and under what conditions. 
This study measured the color of approximately 70 deep-sea species, both pelagic and benthic, and compared the results with two sets of predictions: 1) optimal crypsis under ambient light, 2) optimal crypsis when viewed by bioluminescent \"searchlights.\" The reflectances of the pelagic species at the blue-green wavelengths important for deep-sea vision were far lower than the predicted reflectances for crypsis under ambient light and closer to the zero reflectance prediction for crypsis under searchlights. This suggests that bioluminescence is more important than ambient light for the visual detection of pelagic species at mesopelagic depths. The reflectances of the benthic species were highly variable and a relatively poor match to the substrates on which they were found. However, estimates of the contrast sensitivity of deep-sea visual systems suggest that even approximate matches may be sufficient for crypsis in visually complex benthic habitats. Body coloration was generally uniform, but many crabs had striking patterns that may serve to disrupt the outlines of their bodies.", "title": "" }, { "docid": "206dc1a4a27b603360888d414e0b5cf6", "text": "Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems due to large search spaces. This paper proposes a three-stage method for multi-domain dialogue policy learning-termed NDQN, and applies it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. In this method, the first stage does multi-policy learning via a network of DQN agents; the second makes use of compact state representations by compressing raw inputs; and the third stage applies a pre-training phase for bootstraping the behaviour of agents in the network. Experimental results comparing DQN (baseline) versus NDQN (proposed) using simulations report that the proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems. 
An additional evaluation reports that the NDQN agents outperformed a K-Nearest Neighbour baseline in task success and dialogue length, yielding more efficient and successful dialogues.", "title": "" }, { "docid": "4a5a5958eaf3a011a04d4afc1155e521", "text": "1 Department of Geography, University of Kentucky, Lexington, Kentucky, United States of America, 2 Microsoft Research, New York, New York, United States of America, 3 Data & Society, New York, New York, United States of America, 4 Information Law Institute, New York University, New York, New York, United States of America, 5 Department of Media and Communications, London School of Economics, London, United Kingdom, 6 Harvard-Smithsonian Center for Astrophysics, Harvard University, Cambridge, Massachusetts, United States of America, 7 Center for Engineering Ethics and Society, National Academy of Engineering, Washington, DC, United States of America, 8 Institute for Health Aging, University of California-San Francisco, San Francisco, California, United States of America, 9 Ethical Resolve, Santa Cruz, California, United States of America, 10 Department of Computer Science, Princeton University, Princeton, New Jersey, United States of America, 11 Department of Sociology, Columbia University, New York, New York, United States of America, 12 Carey School of Law, University of Maryland, Baltimore, Maryland, United States of America", "title": "" }, { "docid": "0de6bb78df01b70402007be2eecea72a", "text": "The study aims to compare the efficacy and safety of over-the-counter whitestrips with the American Dental Association (ADA)-recommended home-whitening using the 10 % carbamide peroxide gel. Randomized controlled trials (RCTs) comparing the clinical efficacy and safety of the whitestrips with the 10 % carbamide peroxide (10 % CP) gel applied on tray for tooth whitening in adults were searched at PubMed and Cochrane Central Register of Controlled Trials databases and selected up to October 2014. Efficacy of the whitening techniques was assessed through ∆E, ∆L, and ∆b parameters, while side effects were analyzed as dichotomous variables. Data was extracted independently by two reviewers. Metanalysis was performed using random- and fixed-effect models (RevMan 5.3). Eight studies were included in the metanalysis. The metanalysis revealed no significant difference between the intervention groups for tooth-whitening efficacy measured as ΔE (mean difference [MD]−0.53; 95 % CI [−1.72;0.66]; Z = 0.88; p = 0.38) and ΔL (MD−0.22; 95 % CI [−0.81;0.36]; z = 0.75; p = 0.45); reduction of yellowing was higher with the whitestrips (MD−0.47; 95 % CI [−0.89; −0.06]; Z = 2.25; p = 0.02). Tooth sensitivity (risk ratio [RR] 1.17; 95 % CI [0.81–1.69]; Z = 0.81; p = 0.42) and gingival sensitivity (RR 0.76; 95 % CI [0.53–1.10]; Z = 1.44; p = 0.15) were similar, regardless of the whitening method used. The observed gingival irritation was higher when the 10 % CP gel was applied on tray (RR 0.43; 95 % CI [0.20–0.93]; Z = 2.14; p = 0.03). The quality of evidence generated was rated very low for all outcomes. There is no sound evidence to support the use of the whitening strips in detriment of the ADA-recommended technique based on the 10 % carbamide peroxide gel applied on tray. To the moment, there is no sound evidence in dental literature to suggest that the ADA-recommended whitening technique based on 10 % carbamide peroxide gel could be substituted by the whitening strips. 
The existing studies, with their limitations, revealed similar tooth whitening and tooth and gingival sensitivity for both whitening techniques.", "title": "" }, { "docid": "c341a234ea76d603438f8589ca6ee2b1", "text": "In the last few years, in an attempt to further motivate students to learn a foreign language, there has been an increasing interest in task-based teaching techniques, which emphasize communication and the practical use of language, thus moving away from the repetitive grammar-translation methods. Within this approach, the significance of situating foreign language learners in scenarios where they can meaningfully learn has become a major priority for many educators. This approach is particularly relevant in the context of teaching foreign languages to young children, who need to be introduced to a new language by means of very concrete vocabulary, which is facilitated by the use of objects that they can handle and see. In this study, we investigate the benefits of using wearable and Internet-of-Things (IoT) technologies in streamlining the creation of such realistic task-based language learning scenarios. We show that the use of these technologies will prove beneficial by freeing the instructors from having to keep records of the tasks performed by each student during the class session. Instead, instructors can focus their efforts on creating a friendly environment and encouraging students to participate. Our study sets up a basis for showing the great benefits of using wearable and IoT technologies in streamlining 1) the creation of realistic scenarios in which young foreign language learners can feel comfortable engaging in chat and becoming better prepared for social interaction in a foreign language, and 2) the acquisition and processing of performance metrics.", "title": "" }, { "docid": "38570075c31812866646d47d25667a49", "text": "Mercator is a program that uses hop-limited probes—the same primitive used in traceroute—to infer an Internet map. It uses informed random address probing to carefully explore the IP address space when determining router adjacencies, uses source-route capable routers wherever possible to enhance the fidelity of the resulting map, and employs novel mechanisms for resolving aliases (interfaces belonging to the same router). This paper describes the design of these heuristics and our experiences with Mercator, and presents some preliminary analysis of the resulting Internet map.", "title": "" }, { "docid": "91d59b5e08c711e25d83785c198d9ae1", "text": "The increase in wireless users has led to the spectrum shortage problem. The Federal Communications Commission (FCC) showed that licensed spectrum bands are underutilized, especially TV bands. The IEEE 802.22 standard was proposed to exploit these white spaces in the (TV) frequency spectrum. Cognitive Radio allows unlicensed users to use licensed bands while safeguarding the priority of licensed users. Cognitive Radio is composed of two types of users: licensed users, also known as Primary Users (PUs), and unlicensed users, also known as Secondary Users (SUs). SUs use the resources when the spectrum allocated to a PU is vacant; as soon as the PU becomes active, the SU has to leave the channel for the PU. Hence, opportunistic access is provided by CR to SUs whenever the channel is vacant. Cognitive users sense the spectrum continuously and share this sensing information with other SUs; during this spectrum sensing, the network is vulnerable to many attacks.
One of these attacks is the Primary User Emulation Attack (PUEA), in which malicious secondary users mimic the characteristics of primary users, thereby causing legitimate SUs to erroneously identify the attacker as a primary user, and thus gain access to wireless channels. PUEA attackers are of two types: selfish and malicious. A selfish attacker aims at stealing bandwidth from legitimate SUs for its own transmissions, while a malicious attacker mimics the characteristics of a PU.", "title": "" }, { "docid": "9a79af1c226073cc129087695295a4e5", "text": "This paper presents an effective approach for resume information extraction to support automatic resume management and routing. A cascaded information extraction (IE) framework is designed. In the first pass, a resume is segmented into consecutive blocks attached with labels indicating the information types. Then in the second pass, the detailed information, such as Name and Address, is identified in certain blocks (e.g. blocks labelled with Personal Information), instead of searching globally in the entire resume. The most appropriate model is selected through experiments for each IE task in different passes. The experimental results show that this cascaded hybrid model achieves better F-score than flat models that do not apply the hierarchical structure of resumes. It also shows that applying different IE models in different passes according to the contextual structure is effective.", "title": "" }, { "docid": "994fa5e298eaeeaa03009e97c46cb575", "text": "Three models of the relations of coping efficacy, coping, and psychological problems of children of divorce were investigated. A structural equation model using cross-sectional data of 356 nine- to twelve-year-old children of divorce yielded results that supported coping efficacy as a mediator of the relations between both active coping and avoiding coping and psychological problems. In a prospective longitudinal model with a subsample of 162 of these children, support was found for Time 2 coping efficacy as a mediator of the relations between Time 1 active coping and Time 2 internalizing of problems. Individual growth curve models over four waves also found support for coping efficacy as a mediator of the relations between active coping and psychological problems. No support was found for alternative models of coping as a mediator of the relations between efficacy and symptoms or for coping efficacy as a moderator of the relations between coping and symptoms.", "title": "" }, { "docid": "702f368c8ea8313e661a3b731cec3eba", "text": "This paper develops a new framework for explaining the dynamic aspects of business models in value webs. As companies move from research to roll-out and maturity, three forces cause changes in business models. The technological forces are most important in the first phase, regulation in the second phase, and markets in the third. The forces cause change through influence on the technology, services, finances, and organizational network of the firm. As a result, partners in value webs will differ across these phases. A case study of NTT DoCoMo’s i-mode illustrates the framework.", "title": "" }, { "docid": "4dd84cdcb1bae5dcbaebafb5b234551e", "text": "In recent years, LPWAN technology, designed to realize low-power and long-distance communication, has attracted much attention. Among several LPWAN technologies, Long Range (LoRa) is one of the most competitive physical layer protocols.
Long Range Wide Area Network (LoRaWAN) is an upper layer protocol used with LoRa, and it provides several security functions including a random number-based replay attack prevention system. According to recent studies, the current replay attack prevention of LoRaWAN can mistake benign messages for replay attacks. To resolve this problem, several new replay attack prevention schemes have been proposed. However, existing schemes have limitations such as not being compatible with the existing packet structure or not considering an exceptional situation such as device reset. Therefore, in this paper, we propose a new LoRaWAN replay attack prevention scheme that resolves these problems. Our scheme follows the existing packet structure and is designed to cope with exceptional situations such as device reset. As a result of calculations, in our scheme, the probability that a normal message is mistaken for a replay attack is 60-89% lower than in the current LoRaWAN. Real-world experiments also support these results.", "title": "" }, { "docid": "7e8b58bebacf28227942d603629571cf", "text": "The evaluation results of the Mobius transforms of some often-used waveforms are applied in the image transform. The method of modulation and demodulation in Chen-Mobius communication systems, which is quite different from the traditional one, is newly used. In this processing, the Chen-Mobius inverse transformed functions act as the “modulation” waveforms, and the receiving end is coherently “demodulated” by the often-used digital waveforms. The image is then partitioned into segments that are “modulated” and “demodulated” with different waveform sets. Therefore, it is another new method of image cryptography. In addition, the image transform is simulated with the MATLAB software and the results are discussed in some detail.", "title": "" }, { "docid": "f4999eaa4310b48864fc0bc80ed62980", "text": "In the software process, unresolved natural language (NL) ambiguities in the early requirements phases may cause problems in later stages of development. Although methods exist to detect domain-independent ambiguities, ambiguities are also influenced by the domain-specific background of the stakeholders involved in the requirements process. In this paper, we aim to estimate the degree of ambiguity of typical computer science words (e.g., system, database, interface) when used in different application domains. To this end, we apply a natural language processing (NLP) approach based on Wikipedia crawling and word embeddings, a novel technique to represent the meaning of words through compact numerical vectors. Our preliminary experiments, performed on five different domains, show promising results. The approach allows an estimate of the variation of meaning of the computer science words when used in different domains. Further validation of the method will indicate the words that need to be carefully defined in advance by the requirements analyst to avoid misunderstandings when editing documents and dealing with experts in the considered domains.", "title": "" }, { "docid": "792694fbea0e2e49a454ffd77620da47", "text": "Technology is increasingly shaping our social structures and is becoming a driving force in altering human biology. Besides, human activities already proved to have a significant impact on the Earth system, which in turn generates complex feedback loops between social and ecological systems.
Furthermore, since our species evolved relatively fast from small groups of hunter-gatherers to large and technology-intensive urban agglomerations, it is not a surprise that the major institutions of human society are no longer fit to cope with the present complexity. In this note we draw foundational parallelisms between neurophysiological systems and ICT-enabled social systems, discussing how frameworks rooted in biology and physics could provide heuristic value in the design of evolutionary systems relevant to politics and economics. In this regard we highlight how the governance of emerging technology (i.e. nanotechnology, biotechnology, information technology, and cognitive science), and the one of climate change both presently confront us with a number of connected challenges. In particular: historically high level of inequality; the co-existence of growing multipolar cultural systems in an unprecedentedly connected world; the unlikely reaching of the institutional agreements required to deviate abnormal trajectories of development. We argue that wise general solutions to such interrelated issues should embed the deep understanding of how to elicit mutual incentives in the socio-economic subsystems of Earth system in order to jointly concur to a global utility function (e.g. avoiding the reach of planetary boundaries and widespread social unrest). We leave some open questions on how techno-social systems can effectively learn and adapt with respect to our understanding of geopolitical", "title": "" }, { "docid": "2e2960942966d92ac636fa0be2e9410e", "text": "Clustering is a powerful technique for large-scale topic discovery from text. It involves two phases: first, feature extraction maps each document or record to a point in high-dimensional space, then clustering algorithms automatically group the points into a hierarchy of clusters. We describe an unsupervised, near-linear time text clustering system that offers a number of algorithm choices for each phase. We introduce a methodology for measuring the quality of a cluster hierarchy in terms of FMeasure, and present the results of experiments comparing different algorithms. The evaluation considers some feature selection parameters (tfidfand feature vector length) but focuses on the clustering algorithms, namely techniques from Scatter/Gather (buckshot, fractionation, and split/join) and kmeans. Our experiments suggest that continuous center adjustment contributes more to cluster quality than seed selection does. It follows that using a simpler seed selection algorithm gives a better time/quality tradeoff. We describe a refinement to center adjustment, “vector average damping,” that further improves cluster quality. We also compare the near-linear time algorithms to a group average greedy agglomerative clustering algorithm to demonstrate the time/quality tradeoff quantitatively.", "title": "" }, { "docid": "59f583df7d2aaad02a4e351bc7479cdf", "text": "Language is systematically structured at all levels of description, arguably setting it apart from all other instances of communication in nature. In this article, I survey work over the last 20 years that emphasises the contributions of individual learning, cultural transmission, and biological evolution to explaining the structural design features of language. 
These 3 complex adaptive systems exist in a network of interactions: individual learning biases shape the dynamics of cultural evolution; universal features of linguistic structure arise from this cultural process and form the ultimate linguistic phenotype; the nature of this phenotype affects the fitness landscape for the biological evolution of the language faculty; and in turn this determines individuals' learning bias. Using a combination of computational simulation, laboratory experiments, and comparison with real-world cases of language emergence, I show that linguistic structure emerges as a natural outcome of cultural evolution once certain minimal biological requirements are in place.", "title": "" }, { "docid": "aa429eaf0c3a60825b3f9f158095a66f", "text": "When deploying firewalls in an organization, it is essential to verify that the firewalls are configured properly. The problem of finding out what a given firewall configuration does occurs, for instance, when a new network administrator takes over, or a third party performs a technical security audit for the organization. While the problem can be approached via testing, non-intrusive techniques are often preferred. Existing tools for analyzing firewall configurations usually rely on hard-coded algorithms for analyzing access lists. In this paper we present a tool based on constraint logic programming (CLP) which allows the user to write higher level operations for, e.g., detecting common configuration mistakes. Our tool understands Cisco router access lists, and it is implemented using Eclipse, a constraint logic programming language. The problem of analyzing firewall configurations lends itself quite naturally to be solved by an expert system. We found it surprisingly easy to use logic statements to express knowledge on networking, firewalls, and common configuration mistakes, for instance. Using an existing generic inference engine allowed us to focus on defining the core concepts and relationships in the knowledge base.", "title": "" } ]
scidocsrr
1dd2e2ac1d44cdebfd066486320bb93a
Thematic Analysis and Visualization of Textual Corpus
[ { "docid": "b77ab33226f6d643aee49d63d3485d46", "text": "A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising.", "title": "" } ]
[ { "docid": "9c47d1896892c663987caa24d4a70037", "text": "Multi-pitch estimation of sources in music is an ongoing research area that has a wealth of applications in music information retrieval systems. This paper presents the systematic evaluations of over a dozen competing methods and algorithms for extracting the fundamental frequencies of pitched sound sources in polyphonic music. The evaluations were carried out as part of the Music Information Retrieval Evaluation eXchange (MIREX) over the course of two years, from 2007 to 2008. The generation of the dataset and its corresponding ground-truth, the methods by which systems can be evaluated, and the evaluation results of the different systems are presented and discussed.", "title": "" }, { "docid": "ea30c3baad2f7f74661e85c7155e6fab", "text": "Electrical stimulation of the spinal cord at C7D1 evoked triphasic descending spinal cord evoked potentials (DSCEP) from an oesophago-vertebral recording at D8D8 or D1OD1O. Ascending SCEPs (ASCEP) larger and similar in shape were also observed when the orientation of the stimulating and recording dipoles was reversed. Both SCEPs are in part generated by descending and ascending synchronous excitation of neuronal volume-conducted spinal cord dipoles.", "title": "" }, { "docid": "06ae56bc104dbcaa6c82c5b3d021d7fe", "text": "Open Innovation is a phenomenon that has become increasingly important for both practice and theory over the last few years. The reasons are to be found in shorter innovation cycles, industrial research and development’s escalating costs as well as in the dearth of resources. Subsequently, the open source phenomenon has attracted innovation researchers and practitioners. The recent era of open innovation started when practitioners realised that companies that wished to commercialise both their own ideas as well as other firms’ innovation should seek new ways to bring their in-house ideas to market. They need to deploy pathways outside their current businesses and should realise that the locus where knowledge is created does not necessarily always equal the locus of innovation they need not both be found within the company. Experience has furthermore shown that neither the locus of innovation nor exploitation need lie within companies’ own boundaries. However, emulation of the open innovation approach transforms a company’s solid boundaries into a semi-permeable membrane that enables innovation to move more easily between the external environment and the company’s internal innovation process. How far the open innovation approach is implemented in practice and whether there are identifiable patterns were the questions we investigated with our empirical study. Based on our own empirical database of 124 companies, we identified three core open innovation processes: (1) The outside-in process: Enriching a company’s own knowledge base through the integration of suppliers, customers, and external knowledge sourcing can increase a company’s innovativeness. (2) The inside-out process: The external exploitation of ideas in different markets, selling IP and multiplying technology by channelling ideas to the external environment. (3) The coupled process: Linking outside-in and inside-out by working in alliances with complementary companies during which give and take are crucial for success. 
Consequent thinking along the whole value chain and new business models enable this core process.", "title": "" }, { "docid": "d3fc62a9858ddef692626b1766898c9f", "text": "In order to detect the Cross-Site Script (XSS) vulnerabilities in the web applications, this paper proposes a method of XSS vulnerability detection using optimal attack vector repertory. This method generates an attack vector repertory automatically, optimizes the attack vector repertory using an optimization model, and detects XSS vulnerabilities in web applications dynamically. To optimize the attack vector repertory, an optimization model is built in this paper with a machine learning algorithm, reducing the size of the attack vector repertory and improving the efficiency of XSS vulnerability detection. Based on this method, an XSS vulnerability detector is implemented, which is tested on 50 real-world websites. The testing results show that the detector can detect a total of 848 XSS vulnerabilities effectively in 24 websites.", "title": "" }, { "docid": "b0988b5d33bf97ac4eba7365bce055bd", "text": "This research investigates audience experience of empathy with a performer during a digitally mediated performance. Theatrical performance necessitates social interaction between performers and audience. We present a performance-based study that explores audience awareness of performer's kinaesthetic activity in 2 ways: by isolating the audience's senses (visual, auditory, and kinaesthetic) and by focusing audience perception through defamiliarization. By positioning the performer behind the audience: in their 'backspace', we focus the audience's attention to the performer in an unfamiliar way. We describe two research contributions to the study of audience empathic experience during performance. The first is the development of a phenomenological interview method designed for extracting empirical evaluations of experience of audience members in a performance scenario. The second is a descriptive model for a poetics of reception. Our model is based on an empathetic audience-performer relationship that includes 3 components of audience awareness: contextual, interpersonal, and sense-based. Our research contributions are of particular benefit to performances involving digital media, and can provide insight into audience experience of empathy.", "title": "" }, { "docid": "27ed4433fad92baec6bbbfa003b591b6", "text": "The new generation of high-performance decimal floating-point units (DFUs) is demanding efficient implementations of parallel decimal multipliers. In this paper, we describe the architectures of two parallel decimal multipliers. The parallel generation of partial products is performed using signed-digit radix-10 or radix-5 recodings of the multiplier and a simplified set of multiplicand multiples. The reduction of partial products is implemented in a tree structure based on a decimal multioperand carry-save addition algorithm that uses unconventional (non BCD) decimal-coded number systems. We further detail these techniques and present the new improvements to reduce the latency of the previous designs, which include: optimized digit recoders for the generation of 2n-tuples (and 5-tuples), decimal carry-save adders (CSAs) combining different decimal-coded operands, and carry-free adders implemented by special designed bit counters. 
Moreover, we detail a design methodology that combines all these techniques to obtain efficient reduction trees with different area and delay trade-offs for any number of partial products generated. Evaluation results for 16-digit operands show that the proposed architectures have interesting area-delay figures compared to conventional Booth radix-4 and radix--8 parallel binary multipliers and outperform the figures of previous alternatives for decimal multiplication.", "title": "" }, { "docid": "1d8e2c9bd9cfa2ce283e01cbbcd6ca83", "text": "Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the model to misclassify. In the image domain, these perturbations are often virtually indistinguishable to human perception, causing humans and state-of-the-art models to disagree. However, in the natural language domain, small perturbations are clearly perceptible, and the replacement of a single word can drastically alter the semantics of the document. Given these challenges, we use a black-box population-based optimization algorithm to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models with success rates of 97% and 70%, respectively. We additionally demonstrate that 92.3% of the successful sentiment analysis adversarial examples are classified to their original label by 20 human annotators, and that the examples are perceptibly quite similar. Finally, we discuss an attempt to use adversarial training as a defense, but fail to yield improvement, demonstrating the strength and diversity of our adversarial examples. We hope our findings encourage researchers to pursue improving the robustness of DNNs in the natural language domain.", "title": "" }, { "docid": "ff0d818dfd07033fb5eef453ba933914", "text": "Hyperplastic placentas have been reported in several experimental mouse models, including animals produced by somatic cell nuclear transfer, by inter(sub)species hybridization, and by somatic cytoplasm introduction to oocytes followed by intracytoplasmic sperm injection. Of great interest are the gross and histological features common to these placental phenotypes--despite their quite different etiologies--such as the enlargement of the spongiotrophoblast layers. To find morphological clues to the pathways leading to these similar placental phenotypes, we analyzed the ultrastructure of the three different types of hyperplastic placenta. Most cells affected were of trophoblast origin and their subcellular ultrastructural lesions were common to the three groups, e.g., a heavy accumulation of cytoplasmic vacuoles in the trophoblastic cells composing the labyrinthine wall and an increased volume of spongiotrophoblastic cells with extraordinarily dilatated rough endoplasmic reticulum. Although the numbers of trophoblastic glycogen cells were greatly increased, they maintained their normal ultrastructural morphology, including a heavy glycogen deposition throughout the cytoplasm. The fetal endothelium and small vessels were nearly intact. 
Our ultrastructural study suggests that these three types of placental hyperplasias, with different etiologies, may have common pathological pathways, which probably exclusively affect the development of certain cell types of the trophoblastic lineage during mouse placentation.", "title": "" }, { "docid": "b26d12edbd76ab6e1c5343d75ce74590", "text": "Multilanguage information retrieval promotes users to browse documents in the form of their mother language, and more and more peoples interested in retrieves short answers rather than a full document. In this paper, we present a cross-language video QA system i.e. CLVQ, which could process the English questions, and find answers in Chinese videos. The main contribution of this research are: (1) the application of QA technology into different media; and (2) adopt a new answer finding approach without human-made rules; (3) the combination of several techniques of passage retrieval algorithms. The experimental result shows 56% of answer finding. The testing collection was consists of six discovery movies, and questions are from the School of Discovery Web site.", "title": "" }, { "docid": "8218ce22ac1cccd73b942a184c819d8c", "text": "The extended SMAS facelift techniques gave plastic surgeons the ability to correct the nasolabial fold and medial cheek. Retensioning the SMAS transmits the benefit through the multilinked fibrous support system of the facial soft tissues. The effect is to provide a recontouring of the ptotic soft tissues, which fills out the cheeks as it reduces nasolabial fullness. Indirectly, dermal tightening occurs to a lesser but more natural degree than with traditional facelift surgery. Although details of current techniques may be superseded, the emerging surgical principles are becoming more clearly defined. This article presents these principles and describes the author's current surgical technique.", "title": "" }, { "docid": "429ac6709131b648bb44a6ccaebe6a19", "text": "We highlight a practical yet rarely discussed problem in dialogue state tracking (DST), namely handling unknown slot values. Previous approaches generally assume predefined candidate lists and thus are not designed to output unknown values, especially when the spoken language understanding (SLU) module is absent as in many end-to-end (E2E) systems. We describe in this paper an E2E architecture based on the pointer network (PtrNet) that can effectively extract unknown slot values while still obtains state-of-the-art accuracy on the standard DSTC2 benchmark. We also provide extensive empirical evidence to show that tracking unknown values can be challenging and our approach can bring significant improvement with the help of an effective feature dropout technique.", "title": "" }, { "docid": "ae151d8ed9b8f99cfe22e593f381dd3b", "text": "A common assumption in studies of interruptions is that one is focused in an activity and then distracted by other stimuli. We take the reverse perspective and examine whether one might first be in an attentional state that makes one susceptible to communications typically associated with distraction. We explore the confluence of multitasking and workplace communications from three temporal perspectives -- prior to an interaction, when tasks and communications are interleaved, and at the end of the day. Using logging techniques and experience sampling, we observed 32 employees in situ for five days. We found that certain attentional states lead people to be more susceptible to particular types of interaction. 
Rote work is followed by more Facebook or face-to-face interaction. Focused and aroused states are followed by more email. The more time in email and face-fo-face interaction, and the more total screen switches, the less productive people feel at the day's end. We present the notion of emotional homeostasis along with new directions for multitasking research.", "title": "" }, { "docid": "5378e05d2d231969877131a011b3606a", "text": "Environmental, health, and safety (EHS) concerns are receiving considerable attention in nanoscience and nanotechnology (nano) research and development (R&D). Policymakers and others have urged that research on nano's EHS implications be developed alongside scientific research in the nano domain rather than subsequent to applications. This concurrent perspective suggests the importance of early understanding and measurement of the diffusion of nano EHS research. The paper examines the diffusion of nano EHS publications, defined through a set of search terms, into the broader nano domain using a global nanotechnology R&D database developed at Georgia Tech. The results indicate that nano EHS research is growing rapidly although it is orders of magnitude smaller than the broader nano S&T domain. Nano EHS work is moderately multidisciplinary, but gaps in biomedical nano EHS's connections with environmental nano EHS are apparent. The paper discusses the implications of these results for the continued monitoring and development of the cross-disciplinary utilization of nano EHS research.", "title": "" }, { "docid": "43831e29e62c574a93b6029409690bfe", "text": "We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant convolutions utilize kernels which are parameterized as a linear combination of a complete steerable kernel basis, which is derived analytically in this paper. We prove that equivariant convolutions are the most general equivariant linear maps between fields over R. Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.", "title": "" }, { "docid": "4b9c5c1851909ae31c4510f47cb61a60", "text": "Fraud has been very common in our society, and it affects private enterprises as well as public entities. However, in recent years, the development of new technologies has also provided criminals more sophisticated way to commit fraud and it therefore requires more advanced techniques to detect and prevent such events. The types of fraud in Telecommunication industry includes: Subscription Fraud, Clip on Fraud, Call Forwarding, Cloning Fraud, Roaming Fraud, and Calling Card. Thus, detection and prevention of these frauds is one of the main objectives of the telecommunication industry. In this research, we developed a model that detects fraud in Telecommunication sector in which a random rough subspace based neural network ensemble method was employed in the development of the model to detect subscription fraud in mobile telecoms. This study therefore presents the development of patterns that illustrate the customers’ subscription's behaviour focusing on the identification of non-payment events. 
This information, interrelated with other features, produces the rules that lead to the predictions as early as possible to prevent revenue loss for the company by deployment of the appropriate actions.", "title": "" }, { "docid": "42c6eaae2cbdb850f634d987ab7d1cdb", "text": "The main aim of this paper is to solve a path planning problem for an autonomous mobile robot in static and dynamic environments by determining the collision-free path that satisfies the chosen criteria for shortest distance and path smoothness. The algorithm mimics the real world by adding the actual size of the mobile robot to that of the obstacles and formulating the problem as a moving point in the free-space. The proposed path planning algorithm consists of three modules: in the first module, the path planning algorithm forms an optimised path by conducting a hybridized Particle Swarm Optimization-Modified Frequency Bat (PSO-MFB) algorithm that minimises distance and follows path smoothness criteria; in the second module, any infeasible points generated by the proposed PSO-MFB Algorithm are detected by a novel Local Search (LS) algorithm and integrated with the PSO-MFB algorithm to be converted into feasible solutions; the third module features obstacle detection and avoidance (ODA), which is triggered when the mobile robot detects obstacles within its sensing region, allowing it to avoid collision with obstacles. Simulations have been carried out that indicated that this method generates a feasible path even in complex dynamic environments and thus overcomes the shortcomings of conventional approaches such as grid methods. Comparisons with previous examples in the literature are also included in the results.", "title": "" }, { "docid": "0846274e111ccd0867466bbda93f06e6", "text": "Encrypting Internet communications has been the subject of renewed focus in recent years. In order to add end-to-end encryption to legacy applications without losing the convenience of full-text search, ShadowCrypt and Mimesis Aegis use a new cryptographic technique called \"efficiently deployable efficiently searchable encryption\" (EDESE) that allows a standard full-text search system to perform searches on encrypted data. Compared to other recent techniques for searching on encrypted data, EDESE schemes leak a great deal of statistical information about the encrypted messages and the keywords they contain. Until now, the practical impact of this leakage has been difficult to quantify.\n In this paper, we show that the adversary's task of matching plaintext keywords to the opaque cryptographic identifiers used in EDESE can be reduced to the well-known combinatorial optimization problem of weighted graph matching (WGM). Using real email and chat data, we show how off-the-shelf WGM solvers can be used to accurately and efficiently recover hundreds of the most common plaintext keywords from a set of EDESE-encrypted messages. We show how to recover the tags from Bloom filters so that the WGM solver can be used with the set of encrypted messages that utilizes a Bloom filter to encode its search tags. We also show that the attack can be mitigated by carefully configuring Bloom filter parameters.", "title": "" }, { "docid": "cccb38dab9ead68b5c3bd88f03d75cb0", "text": "and multiple episodes of bleeding from esophageal and gastric varices, underwent a TIPS procedure to control refractory gastroesophageal hemorrhage and as a bridge to liver transplantation.
On admission, he was clinically stable, with an end-stage liver disease score of 13 and an initial total serum bilirubin of 3.7 mg/dl. The TIPS procedure was performed through the right internal jugular vein, using the standard technique9. The stent selected and available was a self-expanding metal Wallstent, 10 x 68 mm (Boston Scientific Corporation, MA, USA), which was properly deployed in the liver, creating a shunt between the right hepatic vein and one of the left branches of the portal vein. The post-stent tract was dilated with a 10 mm balloon, and the control portal venogram demonstrated shunt patency and no significant opacification of the venous collateral circulation. Portal venous pressure fell from 26 to 16 mmHg, and the portosystemic pressure gradient from 19 to 9 mmHg. The procedure was uneventful and the patient remained in the hospital for observation. Three days later he presented sudden jaundice without any signs of liver failure (encephalopathy) or sepsis (fever or hypotension). At this point, laboratory tests showed a total bilirubin level of 41.6 mg/dl (direct bilirubin of 28.1 mg/dl), an international normalized ratio of 1/2, alkaline phosphatase of 151 IU/l, alanine aminotransferase of 60 IU/l, aspartate aminotransferase of 104 IU/l, creatinine of 1.0 mg/dl, and a total leukocyte count of 6,800/ml. Liver Doppler showed an adequate, patent stent with anterograde flow and no evidence of biliary tract dilatation. Abdominal computed tomography and angiography were performed and provided no additional information. One week later, the patient was clinically unchanged, except for worsened jaundice. There was no evidence of infection, encephalopathy or hemobilia. Although the laboratory tests were not", "title": "" }, { "docid": "170873ad959b33eea76e9f542c5dbff6", "text": "This paper reports on a development framework, two prototypes, and a comparative study in the area of multi-tag Near-Field Communication (NFC) interaction. By combining NFC with static and dynamic displays, such as posters and projections, services are made more visible and allow users to interact with them easily by interacting directly with the display with their phone. In this paper, we explore such interactions, in particular, the combination of the phone display and large NFC displays. We also compare static displays and dynamic displays, and present a list of deciding factors for a particular deployment situation. We discuss one prototype for each display type and developed a corresponding framework which can be used to accelerate the development of such prototypes whilst supporting a high level of versatility. The findings of a controlled comparative study indicate, among other things, that all participants preferred the dynamic display, although the static display has advantages, e.g. with respect to privacy and portability.", "title": "" } ]
scidocsrr
80e59e581d3e6a5009860a008a170200
Deep video portraits
[ { "docid": "3f41419ad8c6d9b97df9dba3a4bf02ad", "text": "The computer graphics and vision communities have dedicated long standing efforts in building computerized tools for reconstructing, tracking, and analyzing human faces based on visual input. Over the past years rapid progress has been made, which led to novel and powerful algorithms that obtain impressive results even in the very challenging case of reconstruction from a single RGB or RGB-D camera. The range of applications is vast and steadily growing as these technologies are further improving in speed, accuracy, and ease of use. Motivated by this rapid progress, this state-of-the-art report summarizes recent trends in monocular facial performance capture and discusses its applications, which range from performance-based animation to real-time facial reenactment. We focus our discussion on methods where the central task is to recover and track a three dimensional model of the human face using optimization-based reconstruction algorithms. We provide an in-depth overview of the underlying concepts of real-world image formation, and we discuss common assumptions and simplifications that make these algorithms practical. In addition, we extensively cover the priors that are used to better constrain the under-constrained monocular reconstruction problem, and discuss the optimization techniques that are employed to recover dense, photo-geometric 3D face models from monocular 2D data. Finally, we discuss a variety of use cases for the reviewed algorithms in the context of motion capture, facial animation, as well as image and video editing. CCS Concepts •Computing methodologies → Reconstruction; Tracking; Motion capture; Shape modeling; 3D imaging;", "title": "" } ]
[ { "docid": "d84abd378e3756052ede68731d73ca45", "text": "A major difficulty in applying word vector embeddings in information retrieval is in devising an effective and efficient strategy for obtaining representations of compound units of text, such as whole documents, (in comparison to the atomic words), for the purpose of indexing and scoring documents. Instead of striving for a suitable method to obtain a single vector representation of a large document of text, we aim to develop a similarity metric that makes use of the similarities between the individual embedded word vectors in a document and a query. More specifically, we represent a document and a query as sets of word vectors, and use a standard notion of similarity measure between these sets, computed as a function of the similarities between each constituent word pair from these sets. We then make use of this similarity measure in combination with standard information retrieval based similarities for document ranking. The results of our initial experimental investigations show that our proposed method improves MAP by up to 5.77%, in comparison to standard text-based language model similarity, on the TREC 6, 7, 8 and Robust ad-hoc test collections.", "title": "" }, { "docid": "9da1449675af42a2fc75ba8259d22525", "text": "The purpose of the research reported here was to test empirically a conceptualization of brand associations that consists of three dimensions: brand image, brand attitude and perceived quality. A better understanding of brand associations is needed to facilitate further theoretical development and practical measurement of the construct. Three studies were conducted to: test a protocol for developing product category specific measures of brand image; investigate the dimensionality of the brand associations construct; and explore whether the degree of dimensionality of brand associations varies depending upon a brand's familiarity. Findings confirm the efficacy of the brand image protocol and indicate that brand associations differ across brands and product categories. The latter finding supports the conclusion that brand associations for different products should be measured using different items. As predicted, dimensionality of brand associations was found to be influenced by brand familiarity. Research interest in branding continues to be strong in the marketing literature (e.g. Alden et al., 1999; Kirmani et al., 1999; Erdem, 1998). Likewise, marketing managers continue to realize the power of brands, manifest in the recent efforts of many companies to build strong Internet `̀ brands'' such as amazon.com and msn.com (Narisetti, 1998). The way consumers perceive brands is a key determinant of long-term businessconsumer relationships (Fournier, 1998). Hence, building strong brand perceptions is a top priority for many firms today (Morris, 1996). Despite the importance of brands and consumer perceptions of them, marketing researchers have not used a consistent definition or measurement technique to assess consumer perceptions of brands. To address this, two scholars have recently developed extensive conceptual treatments of branding and related issues. Keller (1993; 1998) refers to consumer perceptions of brands as brand knowledge, consisting of brand awareness (recognition and recall) and brand image. Keller defines brand image as `̀ perceptions about a brand as reflected by the brand associations held in consumer memory''. These associations include perceptions of brand quality and attitudes toward the brand. 
Similarly, Aaker (1991, 1996a) proposes that brand associations are anything linked in memory to a brand. Keller and Aaker both appear to hypothesize that consumer perceptions of brands are multi-dimensional, yet many of the dimensions they identify appear to be very similar. Furthermore, Aaker's and Keller's conceptualizations of consumers' psychological representation of brands have not been subjected to empirical validation. Consequently, it is difficult to determine if the various constructs they discuss, such as brand attitudes and perceived quality, are separate dimensions of brand associations (multi-dimensional), as they propose, or if they are simply indicators of brand associations (unidimensional). A number of studies have appeared recently which measure some aspect of consumer brand associations, but these studies do not use consistent measurement techniques and hence, their results are not comparable. They also do not discuss the issue of how to conceptualize brand associations, but focus on empirically identifying factors which enhance or diminish one component of consumer perceptions of brands (e.g. Berthon et al., 1997; Keller and Aaker, 1997; Keller et al., 1998; Roedder John et al., 1998; Simonin and Ruth, 1998). Hence, the proposed multidimensional conceptualizations of brand perceptions have not been tested empirically, and the empirical work operationalizes these perceptions as uni-dimensional. Our goal is to provide managers of brands a practical measurement protocol based on a parsimonious conceptual model of brand associations. The specific objectives of the research reported here are to: (1) test a protocol for developing category-specific measures of brand image; (2) examine the conceptualization of brand associations as a multidimensional construct by testing brand image, brand attitude, and perceived quality in the same model; and (3) explore whether the degree of dimensionality of brand associations varies depending on a brand's familiarity. In subsequent sections of this paper we explain the theoretical background of our research, describe three studies we conducted to test our conceptual model, and discuss the theoretical and managerial implications of the results. Conceptual background Brand associations According to Aaker (1991), brand associations are the category of a brand's assets and liabilities that include anything 'linked' in memory to a brand (Aaker, 1991). Keller (1998) defines brand associations as informational nodes linked to the brand node in memory that contain the meaning of the brand for consumers. Brand associations are important to marketers and to consumers. Marketers use brand associations to differentiate, position, and extend brands, to create positive attitudes and feelings toward brands, and to suggest attributes or benefits of purchasing or using a specific brand.
Consumers use brand associations to help process, organize, and retrieve information in memory and to aid them in making purchase decisions (Aaker, 1991, pp. 109-13). While several research efforts have explored specific elements of brand associations (Gardner and Levy, 1955; Aaker, 1991; 1996a; 1996b; Aaker and Jacobson, 1994; Aaker, 1997; Keller, 1993), no research has been reported that combined these elements in the same study in order to measure how they are interrelated. Scales that partially measure brand associations have been developed. For example, Park and Srinivasan (1994) developed items to measure one dimension of toothpaste brand associations that included the brand's perceived ability to fight plaque, freshen breath and prevent cavities. This scale is clearly product category specific. Aaker (1997) developed a brand personality scale with five dimensions and 42 items. This scale is not practical to use in some applied studies because of its length. Also, the generalizability of the brand personality scale is limited because many brands are not personality brands, and no protocol is given to adapt the scale. As Aaker (1996b, p. 113) notes, 'using personality as a general indicator of brand strength will be a distortion for some brands, particularly those that are positioned with respect to functional advantages and value'. Hence, many previously developed scales are too specialized to allow for general use, or are too long to be used in some applied settings. Another important issue that has not been empirically examined in the literature is whether brand associations represent a one-dimensional or multi-dimensional construct. Although this may appear to be an obvious question, we propose later in this section the conditions under which this dimensionality may be more (or less) measurable. As previously noted, Aaker (1991) defines brand associations as anything linked in memory to a brand. Three related constructs that are, by definition, linked in memory to a brand, and which have been researched conceptually and measured empirically, are brand image, brand attitude, and perceived quality. We selected these three constructs as possible dimensions or indicators of brand associations in our conceptual model. Of the many possible components of brand associations we could have chosen, we selected these three constructs because they: (1) are the three most commonly cited consumer brand perceptions in the empirical marketing literature; (2) have established, reliable, published measures in the literature; and (3) are three dimensions discussed frequently in prior conceptual research (Aaker, 1991; 1996; Keller, 1993; 1998). We conceptualize brand image (functional and symbolic perceptions), brand attitude (overall evaluation of a brand), and perceived quality (judgments of overall superiority) as possible dimensions of brand associations (see Figure 1). Brand image, brand attitude, and perceived quality Brand image is defined as the reasoned or emotional perceptions consumers attach to specific brands (Dobni and Zinkhan, 1990) and is the first consumer brand perception that was identified in the marketing literature (Gardner and Levy, 1955). Brand image consists of functional and symbolic brand beliefs.
A measurement technique using semantic differential items generated for the relevant product category has been suggested for measuring brand image (Dolich, 1969; Fry and Claxton, 1971). Brand image associations are largely product category specific and measures should be customized for the unique characteristics of specific brand categories (Park and Srinivasan, 1994; Bearden and Etzel, 1982). Brand attitude is defined as consumers' overall evaluation of a brand, whether good or bad (Mitchell and Olson, 1981). Semantic differential scales measuring brand attitude have frequently appeared in the marketing literature. Bruner and Hensel (1996) reported 66 published studies which measured brand attitude", "title": "" }, { "docid": "ced3d4968f38b724369f2dfcb546151f", "text": "Cancer researchers have long recognized that somatic mutations are not uniformly distributed within genes. However, most approaches for identifying cancer mutations focus on either the entire-gene or single amino-acid level. We have bridged these two methodologies with a multiscale mutation clustering algorithm that identifies variable length mutation clusters in cancer genes. We ran our algorithm on 539 genes using the combined mutation data in 23 cancer types from The Cancer Genome Atlas (TCGA) and identified 1295 mutation clusters. The resulting mutation clusters cover a wide range of scales and often overlap with many kinds of protein features including structured domains, phosphorylation sites, and known single nucleotide variants. We statistically associated these multiscale clusters with gene expression and drug response data to illuminate the functional and clinical consequences of mutations in our clusters. Interestingly, we find multiple clusters within individual genes that have differential functional associations: these include PTEN, FUBP1, and CDH1. This methodology has potential implications in identifying protein regions for drug targets, understanding the biological underpinnings of cancer, and personalizing cancer treatments. Toward this end, we have made the mutation clusters and the clustering algorithm available to the public. Clusters and pathway associations can be interactively browsed at m2c.systemsbiology.net. The multiscale mutation clustering algorithm is available at https://github.com/IlyaLab/M2C.", "title": "" }, { "docid": "58763952946a1d7aec1cf8390526a910", "text": "Recognizing human activities from temporal streams of sensory data observations is a very important task on a wide variety of applications in context recognition. Especially for time-series sensory data, a method that takes into account the inherent sequential characteristics of the data is needed. Moreover, activities are hierarchical in nature, in as much that complex activities can be decomposed to a number of simpler ones. In this paper, we propose a two-stage continuous hidden Markov model (CHMM) approach for the task of activity recognition using accelerometer and gyroscope sensory data gathered from a smartphone. The proposed method consists of first-level CHMMs for coarse classification, which separates stationary and moving activities, and second-level CHMMs for fine classification, which classifies the data into their corresponding activity classes. Random Forests (RF) variable importance measures are exploited to determine the optimal feature subsets for both coarse and fine classification.
Experiments show that with the use of a significantly reduced number of features, the proposed method shows competitive performance in comparison to other classification algorithms, achieving an over-all accuracy of 91.76%.", "title": "" }, { "docid": "4ffc94f329b404b89b86df07f8503866", "text": "A new isolated push-pull very high frequency (VHF) resonant DC-DC converter is proposed. The primary side of the converter is a push-pull topology derived from the Class EF2 inverter. The secondary side is a class E based low dv/dt full-wave rectifier. A two-channel multi-stage resonant gate driver is applied to provide two complementary drive signals. The advantages of the converter are as follows: 1) the power isolation is achieved; 2) the MOSFETs and diodes are under soft-switching condition for high efficiency; 3) the voltage stress of the MOSFET is much reduced; 4) the parasitic inductance and capacitance can be absorbed. A 30~36 VDC input, 50-W/ 24-VDC output, 30-MHz prototype has been built to verify the functionality.", "title": "" }, { "docid": "b9fb60fadf13304b46f87fda305f118e", "text": "Coordinated cyberattacks of power meter readings can be arranged to be undetectable by any bad data detection algorithm in the power system state estimation process. These unobservable attacks present a potentially serious threat to grid operations. Of particular interest are sparse attacks that involve the compromise of a modest number of meter readings. An efficient algorithm to find all unobservable attacks [under standard DC load flow approximations] involving the compromise of exactly two power injection meters and an arbitrary number of line power meters is presented. This requires O(n2m) flops for a power system with n buses and m line meters. If all lines are metered, there exist canonical forms that characterize all 3, 4, and 5-sparse unobservable attacks. These can be quickly detected in power systems using standard graph algorithms. Known-secure phasor measurement units [PMUs] can be used as countermeasures against an arbitrary collection of cyberattacks. Finding the minimum number of necessary PMUs is NP-hard. It is shown that p + 1 PMUs at carefully chosen buses are sufficient to neutralize a collection of p cyberattacks.", "title": "" }, { "docid": "4156c9e17390659ec7a1c3f20d9b6e1e", "text": "An e-commerce catalog typically comprises of specifications for millions of products. The search engine receives millions of sales offers from thousands of independent merchants that must be matched to the right products. We describe the challenges that a system for matching unstructured offers to structured product descriptions must address, drawing upon our experience from building such a system for Bing Shopping. The heart of our system is a data-driven component that learns the matching function off-line, which is then applied at run-time for matching offers to products. We provide the design of this and other critical components of the system as well as the details of the extensive experiments we performed to assess the readiness of the system. This system is currently deployed in an experimental Commerce Search Engine and is used to match all the offers received by Bing Shopping to the Bing product catalog.", "title": "" }, { "docid": "249a09e24ce502efb4669603b54b433d", "text": "Deep Neural Networks (DNNs) are universal function approximators providing state-ofthe-art solutions on wide range of applications. 
Common perceptual tasks such as speech recognition, image classification, and object tracking are now commonly tackled via DNNs. Some fundamental problems remain: (1) the lack of a mathematical framework providing an explicit and interpretable input-output formula for any topology, (2) quantification of DNNs stability regarding adversarial examples (i.e. modified inputs fooling DNN predictions whilst undetectable to humans), (3) absence of generalization guarantees and controllable behaviors for ambiguous patterns, (4) leverage unlabeled data to apply DNNs to domains where expert labeling is scarce as in the medical field. Answering those points would provide theoretical perspectives for further developments based on a common ground. Furthermore, DNNs are now deployed in tremendous societal applications, pushing the need to fill this theoretical gap to ensure control, reliability, and interpretability.", "title": "" }, { "docid": "7a300ee432682af17ff338fc7d2ff778", "text": "Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision. However, they face a fundamental limitation: given enough data, the number of nodes in decision trees will grow exponentially with depth. For certain applications, for example on mobile or embedded processors, memory is a limited resource, and so the exponential growth of trees limits their depth, and thus their potential accuracy. This paper proposes decision jungles, revisiting the idea of ensembles of rooted decision directed acyclic graphs (DAGs), and shows these to be compact and powerful discriminative models for classification. Unlike conventional decision trees that only allow one path to every node, a DAG in a decision jungle allows multiple paths from the root to each leaf. We present and compare two new node merging algorithms that jointly optimize both the features and the structure of the DAGs efficiently. During training, node splitting and node merging are driven by the minimization of exactly the same objective function, here the weighted sum of entropies at the leaves. Results on varied datasets show that, compared to decision forests and several other baselines, decision jungles require dramatically less memory while considerably improving generalization.", "title": "" }, { "docid": "7cf9a786a803e19325df942a51c12d20", "text": "Homotopy type theory is an extension of Martin-Löf type theory, based on a correspondence with homotopy theory and higher category theory. In homotopy type theory, the propositional equality type becomes proof-relevant, and corresponds to paths in a space. This allows for a new class of datatypes, called higher inductive types, which are specified by constructors not only for points but also for paths. In this paper, we consider a programming application of higher inductive types. Version control systems such as Darcs are based on the notion of patches - syntactic representations of edits to a repository. We show how patch theory can be developed in homotopy type theory. Our formulation separates formal theories of patches from their interpretation as edits to repositories. A patch theory is presented as a higher inductive type. Models of a patch theory are given by maps out of that type, which, being functors, automatically preserve the structure of patches.
Several standard tools of homotopy theory come into play, demonstrating the use of these methods in a practical programming context.", "title": "" }, { "docid": "5c48c8a2a20408775f5eaf4f575d5031", "text": "In this paper we present a computational cognitive model of task interruption and resumption, focusing on the effects of the problem state bottleneck. Previous studies have shown that the disruptiveness of interruptions is for an important part determined by three factors: interruption duration, interrupting-task complexity, and moment of interruption. However, an integrated theory of these effects is still missing. Based on previous research into multitasking, we propose a first step towards such a theory in the form of a process model that attributes these effects to problem state requirements of both the interrupted and the interrupting task. Subsequently, we tested two predictions of this model in two experiments. The experiments confirmed that problem state requirements are an important predictor for the disruptiveness of interruptions. This suggests that interfaces should be designed to a) interrupt users at low-problem state moments and b) maintain the problem state for the user when interrupted.", "title": "" }, { "docid": "cf2e477275b66656531803ac43411eff", "text": "In the context of text categorization, Centroid Classifier has proved to be a simple and yet efficient method. However, it often suffers from the inductive bias or model misfit incurred by its assumption. In order to address this issue, we propose a novel batch-updated approach to enhance the performance of Centroid Classifier. The main idea behind this method is to take advantage of training errors to successively update the classification model by batch. The technique is simple to implement and flexible to text data. The experimental results indicate that the technique can significantly improve the performance of Centroid Classifier. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "48eacd86c14439454525e5a570db083d", "text": "RATIONALE, AIMS AND OBJECTIVES\nTotal quality in coagulation testing is a necessary requisite to achieve clinically reliable results. Evidence was provided that poor standardization in the extra-analytical phases of the testing process has the greatest influence on test results, though little information is available so far on prevalence and type of pre-analytical variability in coagulation testing.\n\n\nMETHODS\nThe present study was designed to describe all pre-analytical problems on inpatients routine and stat samples recorded in our coagulation laboratory over a 2-year period and clustered according to their source (hospital departments).\n\n\nRESULTS\nOverall, pre-analytic problems were identified in 5.5% of the specimens. Although the highest frequency was observed for paediatric departments, in no case was the comparison of the prevalence among the different hospital departments statistically significant. The more frequent problems could be referred to samples not received in the laboratory following a doctor's order (49.3%), haemolysis (19.5%), clotting (14.2%) and inappropriate volume (13.7%). Specimens not received prevailed in the intensive care unit, surgical and clinical departments, whereas clotted and haemolysed specimens were those most frequently recorded from paediatric and emergency departments, respectively. 
The present investigation demonstrates a high prevalence of pre-analytical problems affecting samples for coagulation testing.\n\n\nCONCLUSIONS\nFull implementation of a total quality system, encompassing a systematic error tracking system, is a valuable tool to achieve meaningful information on the local pre-analytic processes most susceptible to errors, enabling considerations on specific responsibilities and providing the ideal basis for an efficient feedback within the hospital departments.", "title": "" }, { "docid": "aa25ab7078969c54d84aa7e4b2650f9e", "text": "Informative art is computer augmented, or amplified, works of art that not only are aesthetical objects but also information displays, in as much as they dynamically reflect information about their environment. Informative art can be seen as a kind of slow technology, i.e. a technology that promotes moments of concentration and reflection. Our aim is to present the design space of informative art. We do so by discussing its properties and possibilities in relation to work on information visualisation, novel information display strategies, as well as art. A number of examples based on different kinds of mapping relations between information and the properties of the composition of an artwork are described.", "title": "" }, { "docid": "198a0ecb1a1bd4f0a7e4dc757c49ea3d", "text": "There have been a number of studies that have examined the factor structure of the Wechsler Adult Intelligence Scale IV (WAIS-IV) using the standardization sample. In this study, we investigate its factor structure on a clinical neuropsychology sample of mixed aetiology. Correlated factor, higher-order and bi-factor models are all tested. Overall, the results suggest that the WAIS-IV will be suitable for use with this population.", "title": "" }, { "docid": "a2cdcd9400c2c6663b3672e9cf8d41f6", "text": "The use of immersive virtual reality (VR) systems in museums is a recent trend, as the development of new interactive technologies has inevitably impacted the more traditional sciences and arts. This is more evident in the case of novel interactive technologies that fascinate the broad public, as has always been the case with virtual reality. The increasing development of VR technologies has matured enough to expand research from the military and scientific visualization realm into more multidisciplinary areas, such as education, art and entertainment. This paper analyzes the interactive virtual environments developed at an institution of informal education and discusses the issues involved in developing immersive interactive virtual archaeology projects for the broad public.", "title": "" }, { "docid": "0fc50684d7bb4b4eba85bbd474a6548e", "text": "Failure of corollary discharge, a mechanism for distinguishing self-generated from externally generated percepts, has been posited to underlie certain positive symptoms of schizophrenia, including auditory hallucinations. Although originally described in the visual system, corollary discharge may exist in the auditory system, whereby signals from motor speech commands prepare auditory cortex for self-generated speech. While associated with sensorimotor systems, it might also apply to inner speech or thought, regarded as our most complex motor act. 
In this paper, we describe the results of a series of studies in which we have shown that: (1) event-related brain potentials (ERPs) can be used to demonstrate the corollary discharge phenomenon during talking, (2) corollary discharge is abnormal in patients with schizophrenia, (3) EEG gamma band coherence between frontal and temporal lobes is greater during talking than listening and is disrupted by distorted feedback during talking in normals, and (4) patients with schizophrenia do not show this pattern for EEG gamma coherence. While these studies have identified ERPs and EEG gamma coherence indices of the efference copy/corollary discharge system and documented abnormalities in these systems in patients with schizophrenia, we have so far had limited success in establishing a relationship between these neurobiologic indicators of corollary discharge abnormality and reports of hallucinations in patients.", "title": "" }, { "docid": "7399a8096f56c46a20715b9f223d05bf", "text": "Recently, Rao-Blackwellized particle filters (RBPF) have been introduced as an effective means to solve the simultaneous localization and mapping problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Accordingly, a key question is how to reduce the number of particles. In this paper, we present adaptive techniques for reducing this number in a RBPF for learning grid maps. We propose an approach to compute an accurate proposal distribution, taking into account not only the movement of the robot, but also the most recent observation. This drastically decreases the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out resampling operations, which seriously reduces the problem of particle depletion. Experimental results carried out with real mobile robots in large-scale indoor, as well as outdoor, environments illustrate the advantages of our methods over previous approaches", "title": "" }, { "docid": "ba9dfc0f4c54ffa0ac6ad92ada9fec83", "text": "Ontologies as means for conceptualizing and structuring domain knowledge within a community of interest are seen as a key to realize the Semantic Web vision. However, the decentralized nature of the Web makes achieving this consensus across communities difficult, thus, hampering efficient knowledge sharing between them. In order to balance the autonomy of each community with the need for interoperability, mapping mechanisms between distributed ontologies in the Semantic Web are required. In this paper we present MAFRA, an interactive, incremental and dynamic framework for mapping distributed ontologies.", "title": "" }, { "docid": "c09391a25defcb797a7c8da3f429fafa", "text": "BACKGROUND\nTo examine the postulated relationship between Ambulatory Care Sensitive Conditions (ACSC) and Primary Health Care (PHC) in the US context for the European context, in order to develop an ACSC list as markers of PHC effectiveness and to specify which PHC activities are primarily responsible for reducing hospitalization rates.\n\n\nMETHODS\nTo apply the criteria proposed by Solberg and Weissman to obtain a list of codes of ACSC and to consider the PHC intervention according to a panel of experts. 
Five selection criteria: i) existence of prior studies; ii) hospitalization rate at least 1/10,000 or 'risky health problem'; iii) clarity in definition and coding; iv) potentially avoidable hospitalization through PHC; v) hospitalization necessary when health problem occurs. Fulfilment of all criteria was required for developing the final ACSC list. A sample of 248,050 discharges corresponding to 2,248,976 inhabitants of Catalonia in 1996 provided hospitalization rate data. A Delphi survey was performed with a group of 44 experts reviewing 113 ICD diagnostic codes (International Classification of Diseases, 9th Revision, Clinical Modification), previously considered to be ACSC.\n\n\nRESULTS\nThe five criteria selected 61 ICD as a core list of ACSC codes and 90 ICD for an expanded list.\n\n\nCONCLUSIONS\nA core list of ACSC as markers of PHC effectiveness identifies health conditions amenable to specific aspects of PHC and minimizes the limitations attributable to variations in hospital admission policies. An expanded list should be useful to evaluate global PHC performance and to analyse market responsibility for ACSC by PHC and Specialist Care.", "title": "" } ]
scidocsrr
0782e2213393dd19382e91143f015b28
Supporting data quality management in decision-making
[ { "docid": "5caedb986844afcd40b5deb9ca8ba116", "text": "We present here because it will be so easy for you to access the internet service. As in this new era, much technology is sophistically offered by connecting to the internet. No any problems to face, just for this day, you can really keep in mind that the book is the best book for you. We offer the best here to read. After deciding how your feeling will be, you can enjoy to visit the link and get the book.", "title": "" } ]
[ { "docid": "8581de718d41373ee4250a300e675fb4", "text": "It seems almost impossible to overstate the power of words; they literally have changed and will continue to change the course of world history. Perhaps the greatest tools we can give students for succeeding, not only in their education but more generally in life, is a large, rich vocabulary and the skills for using those words. Our ability to function in today’s complex social and economic worlds is mightily affected by our language skills and word knowledge. In addition to the vital importance of vocabulary for success in life, a large vocabulary is more specifically predictive and reflective of high levels of reading achievement. The Report of the National Reading Panel (2000), for example, concluded, “The importance of vocabulary knowledge has long been recognized in the development of reading skills. As early as 1924, researchers noted that growth in reading power relies on continuous growth in word knowledge” (pp. 4–15). Vocabulary or Vocabularies?", "title": "" }, { "docid": "36b4c028bcd92115107cf245c1e005c8", "text": "CAPTCHA is now almost a standard security technology, and has found widespread application in commercial websites. Usability and robustness are two fundamental issues with CAPTCHA, and they often interconnect with each other. This paper discusses usability issues that should be considered and addressed in the design of CAPTCHAs. Some of these issues are intuitive, but some others have subtle implications for robustness (or security). A simple but novel framework for examining CAPTCHA usability is also proposed.", "title": "" }, { "docid": "44258b538f61434d66dbde7f989e9c82", "text": "Studies in animals showed that stress results in damage to the hippocampus, a brain area involved in learning and memory, with associated memory deficits. The mechanism involves glucocorticoids and possibly serotonin acting through excitatory amino acids to mediate hippocampal atrophy. Patients with posttraumatic stress disorder (PTSD) from Vietnam combat and childhood abuse had deficits on neuropsychological measures that have been validated as probes of hippocampal function. In addition, magnetic resonance imaging (MRI) showed reduction in volume of the hippocampus in both combat veterans and victims of childhood abuse. In combat veterans, hippocampal volume reduction was correlated with deficits in verbal memory on neuropsychological testing. These studies introduce the possibility that experiences in the form of traumatic stressors can have long-term effects on the structure and function of the brain.", "title": "" }, { "docid": "31f6eaae19d29b921c92c5fdfd6e279e", "text": "We investigate the preemptive scheduling of periodic, real-time task systems on one processor. First, we show that when all parameters to the system are integers, we may assume without loss of generality that all preemptions occur at integer time values. We then assume, for the remainder of the paper, that all parameters are indeed integers. We then give, as our main lemma, both necessary and sufficient conditions for a task system to be feasible on one processor. Although these conditions cannot, in general, be tested efficiently (unless P=NP), they do allow us to give efficient algorithms for deciding feasibility on one processor for certain types of periodic task systems. For example, we give a pseudo-polynomial-time algorithm for synchronous systems whose densities are bounded by a fixed constant less than 1. 
This algorithm represents an exponential improvement over the previous best algorithm. We also give a polynomial-time algorithm for systems having a fixed number of distinct types of tasks. Furthermore, we are able to use our main lemma to show that the feasibility problem for task systems on one processor is co-NP-complete in the strong sense. In order to show this last result, we first show the Simultaneous Congruences Problem to be NP-complete in the strong sense. Both of these last two results answer questions that have been open for ten years. We conclude by showing that for incomplete task systems, that is, task systems in which the start times are not specified, the feasibility problem is Σ₂ᵖ-complete.", "title": "" }, { "docid": "842d06943ac9ad55ef90d2a4a3c65ed4", "text": "The abundance of memory corruption and disclosure vulnerabilities in kernel code necessitates the deployment of hardening techniques to prevent privilege escalation attacks. As more strict memory isolation mechanisms between the kernel and user space, like Intel's SMEP, become commonplace, attackers increasingly rely on code reuse techniques to exploit kernel vulnerabilities. Contrary to similar attacks in more restrictive settings, such as web browsers, in kernel exploitation, non-privileged local adversaries have great flexibility in abusing memory disclosure vulnerabilities to dynamically discover, or infer, the location of certain code snippets and construct code-reuse payloads. Recent studies have shown that the coupling of code diversification with the enforcement of a \"read XOR execute\" (R^X) memory safety policy is an effective defense against the exploitation of userland software, but so far this approach has not been applied for the protection of the kernel itself.\n In this paper, we fill this gap by presenting kR^X: a kernel hardening scheme based on execute-only memory and code diversification. We study a previously unexplored point in the design space, where a hypervisor or a super-privileged component is not required. Implemented mostly as a set of GCC plugins, kR^X is readily applicable to the x86-64 Linux kernel and can benefit from hardware support (e.g., MPX on modern Intel CPUs) to optimize performance. In full protection mode, kR^X incurs a low runtime overhead of 4.04%, which drops to 2.32% when MPX is available.", "title": "" }, { "docid": "96f2e93e188046fa1d97cedc51b07808", "text": "The development of next-generation electrical link technology to support 400Gb/s standards is underway [1-5]. Physical constraints, paired with the small area available to dissipate heat, impose limits on the maximum number of serial interfaces and therefore their minimum speed. As such, aggregation of currently available 25Gb/s systems is not an option, and the migration path requires serial interfaces to operate at increased rates. According to CEI-56G and IEEE P802.3bs emerging standards, PAM-4 signaling paired with forward error correction (FEC) schemes is enabling several interconnect applications and low-loss profiles [1]. Since the amplitude of each eye is reduced by a factor of 3, while noise power is only halved, a high transmitter (TX) output amplitude is key to preserve high SNR. However, compared to NRZ, the design of a PAM-4 TX is challenged by tight linearity constraints, required to minimize the amplitude distortion among the 4 levels [1].
In principle, current-mode (CM) drivers can deliver a differential peak-to-peak swing up to 4/3(VDD-VOV), but they struggle to generate high-swing PAM-4 levels with the required linearity. This is confirmed by recently published CM PAM-4 drivers, showing limited output swings even with VDD raised to 1.5V [2-4]. Source-series terminated (SST) drivers naturally feature better linearity and represent a valid alternative, but the maximum differential peak-to-peak swing is bounded to VDD only. In [5], a dual-mode SST driver supporting NRZ/PAM-4 was presented, but without FFE for PAM-4 mode. In this paper, we present a PAM-4 transmitter leveraging a hybrid combination of SST and CM driver. The CM part enhances the output swing by 30% beyond the theoretical limit of a conventional SST implementation, while being calibrated to maintain the desired linearity level. A 5b 4-tap FIR filter, where equalization tuning can be controlled independently from output matching, is also embedded. The transmitter, implemented in 28nm CMOS FDSOI, incorporates a half-rate serializer, duty-cycle correction (DCC), ≫2kV HBM ESD diodes, and delivers a full swing of 1.3Vppd at 45Gb/s while drawing 120mA from a 1V supply. The power efficiency is ~2 times better than those compared in this paper.", "title": "" }, { "docid": "a74fc2476ec43b07eccfe2be1c9ef2cb", "text": "In this paper, a broadband high efficiency Class-AB balanced power amplifier (PA) is presented. The proposed PA offers a high efficiency of > 43% for a band of 400 MHz to 1.4 GHz. The broadband matching circuits were realized with microstrip-radial-stubs (MRS) on low loss Rogers 5880 substrate with 0.78 mm thickness and 2.2 dielectric constant. The input and output matching is better than −13 dB throughout the band. The PA delivers maximum output power of 41.5 dBm with a flat gain of 11.4–13.5 dB. Due to high gain, stability, efficiency, and broadband, the proposed PA is thus suitable for recent and upcoming wireless communication systems.", "title": "" }, { "docid": "3fd8092faee792a316fb3d1d7c2b6244", "text": "The complete dynamics model of a four-Mecanum-wheeled robot considering mass eccentricity and friction uncertainty is derived using the Lagrange’s equation. Then based on the dynamics model, a nonlinear stable adaptive control law is derived using the backstepping method via Lyapunov stability theory. In order to compensate for the model uncertainty, a nonlinear damping term is included in the control law, and the parameter update law with σ-modification is considered for the uncertainty estimation. Computer simulations are conducted to illustrate the suggested control approach.", "title": "" }, { "docid": "defb837e866948e5e092ab64476d33b5", "text": "Recent multicoil polarised pads called Double D pads (DDP) and Bipolar Pads (BPP) show excellent promise when used in lumped charging due to having single sided fields and high native Q factors. However, improvements to field leakage are desired to enable higher power transfer while keeping the leakage flux within ICNIRP levels. This paper proposes a method to reduce the leakage flux which a lumped inductive power transfer (IPT) system exhibits by modifying the ferrite structure of its pads. The DDP and BPP pads ferrite structures are both modified by extending them past the ends of the coils in each pad with the intention of attracting only magnetic flux generated by the primary pad not coupled onto the secondary pad. 
Simulated improved ferrite structures are validated through practical measurements.", "title": "" }, { "docid": "5b7483a4dea12d8b07921c150ccc66ee", "text": "OBJECTIVE\nWe reviewed the efficacy of occupational therapy-related interventions for adults with rheumatoid arthritis.\n\n\nMETHOD\nWe examined 51 Level I studies (19 physical activity, 32 psychoeducational) published 2000-2014 and identified from five databases. Interventions that focused solely on the upper or lower extremities were not included.\n\n\nRESULTS\nFindings related to key outcomes (activities of daily living, ability, pain, fatigue, depression, self-efficacy, disease symptoms) are presented. Strong evidence supports the use of aerobic exercise, resistive exercise, and aquatic therapy. Mixed to limited evidence supports dynamic exercise, Tai Chi, and yoga. Among the psychoeducation interventions, strong evidence supports the use of patient education, self-management, cognitive-behavioral approaches, multidisciplinary approaches, and joint protection, and limited or mixed evidence supports the use of assistive technology and emotional disclosure.\n\n\nCONCLUSION\nThe evidence supports interventions within the scope of occupational therapy practice for rheumatoid arthritis, but few interventions were occupation based.", "title": "" }, { "docid": "f028a403190899f96fcd6d6f9efbd2f1", "text": "It is aimed to design a X-band monopulse microstrip antenna array that can be used almost in all modern tracking radars and having superior properties in angle detection and angular accuracy than the classical ones. In order to create a monopulse antenna array, a rectangular microstrip antenna is designed and 16 of it gathered together using the nonlinear central feeding to suppress the side lobe level (SLL) of the antenna. The monopulse antenna is created by the combining 4 of these 4×4 array antennas with a microstrip comparator designed using four branch line coupler. Good agreement is noted between the simulation and measurement results.", "title": "" }, { "docid": "1b777ff8e7c30c23e7cc827ec3aee0bc", "text": "The task of 2-D articulated human pose estimation in natural images is extremely challenging due to the high level of variation in human appearance. These variations arise from different clothing, anatomy, imaging conditions and the large number of poses it is possible for a human body to take. Recent work has shown state-of-the-art results by partitioning the pose space and using strong nonlinear classifiers such that the pose dependence and multi-modal nature of body part appearance can be captured. We propose to extend these methods to handle much larger quantities of training data, an order of magnitude larger than current datasets, and show how to utilize Amazon Mechanical Turk and a latent annotation update scheme to achieve high quality annotations at low cost. We demonstrate a significant increase in pose estimation accuracy, while simultaneously reducing computational expense by a factor of 10, and contribute a dataset of 10,000 highly articulated poses.", "title": "" }, { "docid": "b1ba519ffe5321d9ab92ebed8d9264bb", "text": "OBJECTIVES\nThe purpose of this study was to establish reference charts of fetal biometric parameters measured by 2-dimensional sonography in a large Brazilian population.\n\n\nMETHODS\nA cross-sectional retrospective study was conducted including 31,476 low-risk singleton pregnancies between 18 and 38 weeks' gestation. 
The following fetal parameters were measured: biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight. To assess the correlation between the fetal biometric parameters and gestational age, polynomial regression models were created, with adjustments made by the determination coefficient (R(2)).\n\n\nRESULTS\nThe means ± SDs of the biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight measurements at 18 and 38 weeks were 4.2 ± 2.34 and 9.1 ± 4.0 cm, 15.3 ± 7.56 and 32.3 ± 11.75 cm, 13.3 ± 10.42 and 33.4 ± 20.06 cm, 2.8 ± 2.17 and 7.2 ± 3.58 cm, and 256.34 ± 34.03 and 3169.55 ± 416.93 g, respectively. Strong correlations were observed between all fetal biometric parameters and gestational age, best represented by second-degree equations, with R(2) values of 0.95, 0.96, 0.95, 0.95, and 0.95 for biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight.\n\n\nCONCLUSIONS\nFetal biometric parameters were determined for a large Brazilian population, and they may serve as reference values in cases with a high risk of intrauterine growth disorders.", "title": "" }, { "docid": "8bdbf6fc33bc0b2cb5911683c13912a0", "text": "The breaking of solid objects, like glass or pottery, poses a complex problem for computer animation. We present our methods of using physical simulation to drive the animation of breaking objects. Breakage is obtaned in a three-dimensional flexible model as the limit of elastic behavior. This article describes three principal features of the model: a breakage model, a collision-detection/response scheme, and a geometric modeling method. We use networks of point masses connected by springs to represent physical objects that can bend and break. We present effecient collision-detection algorithms, appropriate for simulating the collisions between the various pieces that interact in breakage. The capability of modeling real objects is provided by a technique of building up composite structures from simple lattice models. We applied these methods to animate the breaking of a teapot and other dishware activities in the animationTipsy Turvy shown at Siggraph '89. Animation techniques that rely on physical simulation to control the motion of objects are discussed, and further topics for research are presented.", "title": "" }, { "docid": "a8f8a3ff73b3cf0c6f415fb4008105a7", "text": "A variety of ontologies are used to define and represent knowledge in many domains. Many ontological approaches have been successfully applied in the field of Requirements Engineering. In order to successfully harness the disparate ontologies, researchers have focused on various ontology merging techniques. However, no serious attempts have been made in the area of Requirements Elicitation where ontology merging has the potential to be quite effective in generating requirements specifications quickly through the means of reasoning based on combined ontologies. This paper attempts to define an approach needed to effectively combine ontologies to enhance the Requirements Elicitation process. A methodology is proposed whereby domain knowledge encapsulated in existing ontologies is combined with an ontology being developed to capture the requirements. 
Using this, requirements engineers would be able to create more refined Requirements Deliverables.", "title": "" }, { "docid": "ebc1e12f85c6b03de14b1170f450d3f8", "text": "Mobility disability is becoming prevalent in the obese older population (> or = 60 years of age). We included a total of 13 cross-sectional and 15 longitudinal studies based on actual physical assessments of mobility in the obese older population in this review. We systematically examined existing evidence of which adiposity estimate best predicted mobility disability. Cross-sectional studies (82-4000 participants) showed poorer lower extremity mobility with increasing obesity severity in both men and women. All longitudinal studies (1-22 years) except for one, reported relationships between adiposity and declining mobility. While different physical tests made interpretation challenging, a consistent finding was that walking, stair climbing and chair rise ability were compromised with obesity, especially if the body mass index (BMI) exceeded 35 kg m(-2). More studies found that obese women were at an increased risk for mobility impairment than men. Existing evidence suggests that BMI and waist circumference are emerging as the more consistent predictors of the onset or worsening of mobility disability. Limited interventional evidence shows that weight loss is related with increased mobility and lower extremity function. Additional longitudinal studies are warranted that address overall body composition fat and muscle mass or change on future disability.", "title": "" }, { "docid": "bf10806c9f2270a6958e38be6f640e1d", "text": "Multivariate time series data can be found in many application domains. Examples include data from computer networks, healthcare, social networks, or financial markets. Often, patterns in such data evolve over time among multiple dimensions and are hard to detect. Dimensionality reduction methods such as PCA and MDS allow analysis and visualization of multivariate data, but per se do not provide means to explore multivariate patterns over time. We propose Temporal Multidimensional Scaling (TMDS), a novel visualization technique that computes temporal one-dimensional MDS plots for multivariate data which evolve over time. Using a sliding window approach, MDS is computed for each data window separately, and the results are plotted sequentially along the time axis, taking care of plot alignment. Our TMDS plots enable visual identification of patterns based on multidimensional similarity of the data evolving over time. We demonstrate the usefulness of our approach in the field of network security and show in two case studies how users can iteratively explore the data to identify previously unknown, temporally evolving patterns.", "title": "" }, { "docid": "406c07534f083ebb4a71951a29292f2d", "text": "Recurrent neural networks (RNNs) are capable of modeling the temporal dynamics of complex sequential information. However, the structures of existing RNN neurons mainly focus on controlling the contributions of current and historical information but do not explore the different importance levels of different elements in an input vector of a time slot. We propose adding a simple yet effective Element-wiseAttention Gate (EleAttG) to an RNN block (e.g., all RNN neurons in a network layer) that empowers the RNN neurons to have the attentiveness capability. 
For an RNN block, an EleAttG is added to adaptively modulate the input by assigning different levels of importance, i.e., attention, to each element/dimension of the input. We refer to an RNN block equipped with an EleAttG as an EleAtt-RNN block. Specifically, the modulation of the input is content adaptive and is performed at fine granularity, being element-wise rather than input-wise. The proposed EleAttG, as an additional fundamental unit, is general and can be applied to any RNN structures, e.g., standard RNN, Long Short-Term Memory (LSTM), or Gated Recurrent Unit (GRU). We demonstrate the effectiveness of the proposed EleAtt-RNN by applying it to the action recognition tasks on both 3D human skeleton data and RGB videos. Experiments show that adding attentiveness through EleAttGs to RNN blocks significantly boosts the power of RNNs.", "title": "" }, { "docid": "642eaca36d3e2045ff07a24ffd82f69c", "text": "BACKGROUND\nTo ensure accurate implementation of stabilization exercises in rehabilitation, physical therapists need to understand the muscle activation patterns of prescribed exercise.\n\n\nOBJECTIVE\nCompare muscle activity during eight trunk and lumbar spine stabilization exercises of the Functional Kinetics concept by Klein-Vogelbach.\n\n\nMETHODS\nA controlled laboratory study with a single-group repeated-measures design was utilized to analyze surface electromyographic intensities of 14 female and 6 male young healthy participants performing eight exercises. Data were captured from the rectus abdominis, external/internal oblique and lumbar paraspinalis. The normalized muscle activation levels (maximum voluntary isometric contraction, MVIC) for three repetitions during each exercise and muscle were analyzed.\n\n\nRESULTS\nSide bridging (28 ± 20%MVIC) and advanced planking (29 ± 20%MVIC) reached the highest activity in the rectus abdominis. For external and internal oblique muscles, side bridging also showed the greatest activity of 99 ± 36%MVIC and 52 ± 25%MVIC, respectively. Apart from side bridging (52 ± 14%MVIC), the supine roll-out (31 ± 12%MVIC) and prone roll-out (31 ± 9%MVIC) showed the greatest activity for the paraspinalis. The advanced quadruped, seated back extension and flexion on chair/Swiss Ball, prone roll-out and advanced one-leg back bridging only yielded negligible muscle activities for the rectus abdominis (< 5%MVIC).\n\n\nCONCLUSION\nBased on the data obtained, recommendations for selective trunk muscle activation during eight stabilization exercises were established, which will guide physical therapists in the development of exercises tailored to the needs of their patients.", "title": "" }, { "docid": "042be132d3e4e99fa5fb3c2efda99d93", "text": "In this research relevant areas that are important in Information System Security have been reviewed based on the health care industry of Malaysia. Some concepts such as definition of Information System Security, System Security Goals, System Security Threats and human error have been studied. The Human factors that are effective on Information System Security have been highlighted and also some relevant models have been introduced. Reviewing the pervious factors helped to find out the Health Information System factors. Finally, the effective human factors on Health Information System have been identified and the structure of Healthcare industry has been studied. Moreover, these factors are categorized in three new groups: Organizational Factors, Motivational Factors and Learning. 
This information will help in designing a framework for a Health Information System. 1. Introduction: With little concern for people and organizational issues, most information systems security strategies are technical in nature. Because these strategies concentrate on technically oriented solutions, for instance checklists, risk analysis and assessment techniques, they tend to disregard the social factors of risk and the informal structures of organizations, so there is a need to investigate other ways of managing information systems security. This investigation concentrates chiefly on human and organizational factors within computer and information security. Irrespective of the power of technical controls, human and organizational factors influence their employment and use, and the impact on security can be drastic (Bishop, 2002). In this respect, weak computer and information security protection (e.g., weak passwords or poor usability) may set the stage for security vulnerabilities, and malicious intentions may appear. Flawed organizational policies and individual practices whose origins are deeply rooted in early design presumptions or managerial choices cause susceptibilities (Besnard and Arief, 2004). Health Information Systems (HIS) have been implemented in Malaysia since the late 1990s. A HIS is an integration of several hospitals' information systems that manages administrative work, patients and clinical records. Because HIS data are easy to access through the internet, their vulnerability to misuse, data loss and attacks increases. Health data are very sensitive and therefore require strong protection, and information security must be watched carefully because it plays an important role in protecting the data from being stolen or harmed. Despite the vast research in information security, the human factor has been neglected …", "title": "" } ]
scidocsrr
a2fcbc8d8b64d9f6b8a65d4771c4e36f
Contactless EMG sensors embroidered onto textile
[ { "docid": "991ab90963355f16aa2a83655577ba54", "text": "Highly durable, flexible, and even washable multilayer electronic circuitry can be constructed on textile substrates, using conductive yarns and suitably packaged components. In this paper we describe the development of e-broidery (electronic embroidery, i.e., the patterning of conductive textiles by numerically controlled sewing or weaving processes) as a means of creating computationally active textiles. We compare textiles to existing flexible circuit substrates with regard to durability, conformability, and wearability. We also report on: some unique applications enabled by our work; the construction of sensors and user interface elements in textiles; and a complete process for creating flexible multilayer circuits on fabric substrates. This process maintains close compatibility with existing electronic components and design tools, while optimizing design techniques and component packages for use in textiles. E veryone wears clothing. It conveys a sense of the wearer's identity, provides protection from the environment, and supplies a convenient way to carry all the paraphernalia of daily life. Of course, clothing is made from textiles, which are themselves among the first composite materials engineered by humans. Textiles have mechanical, aesthetic, and material advantages that make them ubiquitous in both society and industry. The woven structure of textiles and spun fibers makes them durable, washable, and conformal, while their composite nature affords tremendous variety in their texture, for both visual and tactile senses. Sadly, not everyone wears a computer, although there is presently a great deal of interest in \" wear-able computing. \" 1 Wearable computing may be seen as the result of a design philosophy that integrates embedded computation and sensing into everyday life to give users continuous access to the capabilities of personal computing. Ideally, computers would be as convenient, durable, and comfortable as clothing, but most wearable computers still take an awkward form that is dictated by the materials and processes traditionally used in electronic fabrication. The design principle of packaging electronics in hard plastic boxes (no matter how small) is pervasive, and alternatives are difficult to imagine. As a result, most wearable computing equipment is not truly wearable except in the sense that it fits into a pocket or straps onto the body. What is needed is a way to integrate technology directly into textiles and clothing. Furthermore, textile-based computing is not limited to applications in wearable computing; in fact, it is broadly applicable to ubiquitous computing, allowing the integration of interactive elements into furniture and decor in general. In …", "title": "" } ]
[ { "docid": "39070a1f503e60b8709050fc2a250378", "text": "Plants in their natural habitats adapt to drought stress in the environment through a variety of mechanisms, ranging from transient responses to low soil moisture to major survival mechanisms of escape by early flowering in absence of seasonal rainfall. However, crop plants selected by humans to yield products such as grain, vegetable, or fruit in favorable environments with high inputs of water and fertilizer are expected to yield an economic product in response to inputs. Crop plants selected for their economic yield need to survive drought stress through mechanisms that maintain crop yield. Studies on model plants for their survival under stress do not, therefore, always translate to yield of crop plants under stress, and different aspects of drought stress response need to be emphasized. The crop plant model rice ( Oryza sativa) is used here as an example to highlight mechanisms and genes for adaptation of crop plants to drought stress.", "title": "" }, { "docid": "0fa189109e1a1f85bb66bd85dc91a75d", "text": "Deep neural networks (DNN) achieved significant breakthrough in vision recognition in 2012 and quickly became the leading machine learning algorithm in Big Data based large scale object recognition applications. The successful deployment of DNN based applications pose challenges for a cross platform software framework that enable multiple user scenarios, including offline model training on HPC clusters and online recognition in embedded environments. Existing DNN frameworks are mostly focused on a closed format CUDA implementations, which is limiting of deploy breadth of DNN hardware systems.\n This paper presents OpenCL™ caffe, which targets in transforming the popular CUDA based framework caffe [1] into open standard OpenCL backend. The goal is to enable a heterogeneous platform compatible DNN framework and achieve competitive performance based on OpenCL tool chain. Due to DNN models' high complexity, we use a two-phase strategy. First we introduce the OpenCL porting strategies that guarantee algorithm convergence; then we analyze OpenCL's performance bottlenecks in DNN domain and propose a few optimization techniques including batched manner data layout and multiple command queues to better map the problem size into existing BLAS library, improve hardware resources utilization and boost OpenCL runtime efficiency.\n We verify OpenCL caffe's successful offline training and online recognition on both server-end and consumer-end GPUs. Experimental results show that the phase-two's optimized OpenCL caffe achieved a 4.5x speedup without modifying BLAS library. The user can directly run mainstream DNN models and achieves the best performance for a specific processors by choosing the optimal batch number depending on H/W properties and input data size.", "title": "" }, { "docid": "680fdf59b65820f20b3e44f7f2a30ed2", "text": "We propose a self-supervised approach for learning representations of relationships between humans and their environment, including object interactions, attributes, and body pose, entirely from unlabeled videos recorded from multiple viewpoints (Fig. 2). We train an embedding with a triplet loss that contrasts a pair of simultaneous frames from different viewpoints with temporally adjacent and visually similar frames (Fig. 1). We call this model Time- Contrastive Networks (TCN). 
The contrastive signal encourages the model to discover meaningful dimensions and attributes that can explain the changing state of objects and the world from visually similar frames while learning invariance to viewpoint, occlusions, motion blur, lighting, background. The experimental evaluation of our multiviewpoint embedding technique examines its application to reasoning about object interactions, as well as human pose imitation with a real robot. We demonstrate that our model can correctly identify corresponding steps in complex object interactions, such as pouring (Table 1), between different videos and with different instances. We also show what is, to the best of our knowledge, the first self-supervised results for end-to-end imitation learning of human motions with a real robot (Table 2). Results are best visualized in videos available at 1 and the full paper is available at 2.", "title": "" }, { "docid": "d277a7e6a819af474b31c7a35b9c840f", "text": "Blending face geometry in different expressions is a popular approach for facial animation in films and games. The quality of the animation relies on the set of blend shape expressions, and creating sufficient blend shapes takes a large amount of time and effort. This paper presents a complete pipeline to create a set of blend shapes in different expressions for a face mesh having only a neutral expression. A template blend shapes model having sufficient expressions is provided and the neutral expression of the template mesh model is registered into the target face mesh using a non-rigid ICP (iterative closest point) algorithm. Deformation gradients between the template and target neutral mesh are then transferred to each expression to form a new set of blend shapes for the target face. We solve optimization problem to consistently map the deformation of the source blend shapes to the target face model. The result is a new set of blend shapes for a target mesh having triangle-wise correspondences between the source face and target faces. After creating blend shapes, the blend shape animation of the source face is retargeted to the target mesh automatically.", "title": "" }, { "docid": "b78c38c6ac9809f46e3d73f90e60afc6", "text": "The INTERSPEECH 2012 Speaker Trait Challenge provides for the first time a unified test-bed for ‘perceived’ speaker traits: Personality in the five OCEAN personality dimensions, likability of speakers, and intelligibility of pathologic speakers. In this paper, we describe these three Sub-Challenges, Challenge conditions, baselines, and a new feature set by the openSMILE toolkit, provided to the participants.", "title": "" }, { "docid": "01ba1a2087b177895dceff8675e92bbb", "text": "The beer game is a widely used in-class game that is played in supply chain management classes to demonstrate the bullwhip effect. The game is a decentralized, multi-agent, cooperative problem that can be modeled as a serial supply chain network in which agents cooperatively attempt to minimize the total cost of the network even though each agent can only observe its own local information. Each agent chooses order quantities to replenish its stock. Under some conditions, a base-stock replenishment policy is known to be optimal. However, in a decentralized supply chain in which some agents (stages) may act irrationally (as they do in the beer game), there is no known optimal policy for an agent wishing to act optimally. 
We propose a machine learning algorithm, based on deep Q-networks, to optimize the replenishment decisions at a given stage. When playing alongside agents who follow a base-stock policy, our algorithm obtains near-optimal order quantities. It performs much better than a base-stock policy when the other agents use a more realistic model of human ordering behavior. Unlike most other algorithms in the literature, our algorithm does not have any limits on the beer game parameter values. Like any deep learning algorithm, training the algorithm can be computationally intensive, but this can be performed ahead of time; the algorithm executes in real time when the game is played. Moreover, we propose a transfer learning approach so that the training performed for one agent and one set of cost coefficients can be adapted quickly for other agents and costs. Our algorithm can be extended to other decentralized multi-agent cooperative games with partially observed information, which is a common type of situation in real-world supply chain problems.", "title": "" }, { "docid": "91bbea10b8df8a708b65947c8a8832dc", "text": "Event sequence, asynchronously generated with random timestamp, is ubiquitous among applications. The precise and arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally different from the time-series whereby series is indexed with fixed and equal time interval. One expressive mathematical tool for modeling event is point process. The intensity functions of many point processes involve two components: the background and the effect by the history. Due to its inherent spontaneousness, the background can be treated as a time series while the other need to handle the history events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model with event type and timestamp prediction output layers can be trained end-to-end. Our approach takes an RNN perspective to point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form in point processes. Meanwhile end-to-end training opens the venue for reusing existing rich techniques in deep network for point process modeling. We apply our model to the predictive maintenance problem using a log dataset by more than 1000 ATMs from a global bank headquartered in North America.", "title": "" }, { "docid": "2a187505ea098a45b5a4da4f4a32e049", "text": "Combining information extraction systems yields significantly higher quality resources than each system in isolation. In this paper, we generalize such a mixing of sources and features in a framework called Ensemble Semantics. We show very large gains in entity extraction by combining state-of-the-art distributional and patternbased systems with a large set of features from a webcrawl, query logs, and Wikipedia. Experimental results on a webscale extraction of actors, athletes and musicians show significantly higher mean average precision scores (29% gain) compared with the current state of the art.", "title": "" }, { "docid": "5ba721a06c17731458ef1ecb6584b311", "text": "BACKGROUND\nPrimary and tension-free closure of a flap is often required after particular surgical procedures (e.g., guided bone regeneration). 
Other times, flap advancement may be desired for situations such as root coverage.\n\n\nMETHODS\nThe literature was searched for articles that addressed techniques, limitations, and complications associated with flap advancement. These articles were used as background information. In addition, reference information regarding anatomy was cited as necessary to help describe surgical procedures.\n\n\nRESULTS\nThis article describes techniques to advance mucoperiosteal flaps, which facilitate healing. Methods are presented for a variety of treatment scenarios, ranging from minor to major coronal tissue advancement. Anatomic landmarks are identified that need to be considered during surgery. In addition, management of complications associated with flap advancement is discussed.\n\n\nCONCLUSIONS\nTension-free primary closure is attainable. The technique is dependent on the extent that the flap needs to be advanced.", "title": "" }, { "docid": "1e9fe0b5da36281a24b1f6580113f5cf", "text": "The external load of a team-sport athlete can be measured by tracking technologies, including global positioning systems (GPS), local positioning systems (LPS), and vision-based systems. These technologies allow for the calculation of displacement, velocity and acceleration during a match or training session. The accurate quantification of these variables is critical so that meaningful changes in team-sport athlete external load can be detected. High-velocity running, including sprinting, may be important for specific team-sport match activities, including evading an opponent or creating a shot on goal. Maximal accelerations are energetically demanding and frequently occur from a low velocity during team-sport matches. Despite extensive research, conjecture exists regarding the thresholds by which to classify the high velocity and acceleration activity of a team-sport athlete. There is currently no consensus on the definition of a sprint or acceleration effort, even within a single sport. The aim of this narrative review was to examine the varying velocity and acceleration thresholds reported in athlete activity profiling. The purposes of this review were therefore to (1) identify the various thresholds used to classify high-velocity or -intensity running plus accelerations; (2) examine the impact of individualized thresholds on reported team-sport activity profile; (3) evaluate the use of thresholds for court-based team-sports and; (4) discuss potential areas for future research. The presentation of velocity thresholds as a single value, with equivocal qualitative descriptors, is confusing when data lies between two thresholds. In Australian football, sprint efforts have been defined as activity >4.00 or >4.17 m·s-1. Acceleration thresholds differ across the literature, with >1.11, 2.78, 3.00, and 4.00 m·s-2 utilized across a number of sports. It is difficult to compare literature on field-based sports due to inconsistencies in velocity and acceleration thresholds, even within a single sport. Velocity and acceleration thresholds have been determined from physical capacity tests. Limited research exists on the classification of velocity and acceleration data by female team-sport athletes. 
Alternatively, data mining techniques may be used to report team-sport athlete external load, without the requirement of arbitrary or physiologically defined thresholds.", "title": "" }, { "docid": "7a07901a9a850205c4365e4fb5d1ec1d", "text": "A wireless sensor network consists of spatially distributed autonomous sensors to cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants. Different approaches have used for simulation and modeling of SN (Sensor Network) and WSN. Traditional approaches consist of various simulation tools based on different languages such as C, C++ and Java. In this paper, MATLAB (7.6) Simulink was used to build a complete WSN system. Simulation procedure includes building the hardware architecture of the transmitting nodes, modeling both the communication channel and the receiving master node architecture. Bluetooth was chosen to undertake the physical layer communication with respect to different channel parameters (i.e., Signal to Noise ratio, Attenuation and Interference). The simulation model was examined using different topologies under various conditions and numerous results were collected. This new simulation methodology proves the ability of the Simulink MATLAB to be a useful and flexible approach to study the effect of different physical layer parameters on the performance of wireless sensor networks.", "title": "" }, { "docid": "373c89beb40ce164999892be2ccb8f46", "text": "Recent advances in mobile technologies (esp., smart phones and tablets with built-in cameras, GPS and Internet access) made augmented reality (AR ) applications available for the broad public. While many researchers have examined the af fordances and constraints of AR for teaching and learning, quantitative evidence for it s effectiveness is still scarce. To contribute to filling this research gap, we designed and condu cted a pretest-posttest crossover field experiment with 101 participants at a mathematics exh ibition to measure the effect of AR on acquiring and retaining mathematical knowledge in a n informal learning environment. We hypothesized that visitors acquire more knowledge f rom augmented exhibits than from exhibits without AR. The theoretical rationale for our h ypothesis is that AR allows for the efficient and effective implementation of a subset of the des ign principles defined in the cognitive theory of multimedia. The empirical results we obtaine d show that museum visitors performed better on knowledge acquisition and retention tests related to augmented exhibits than to nonaugmented exhibits and that they perceived AR as a valuable and desirable add-on for museum exhibitions.", "title": "" }, { "docid": "e5a7acf6980c93c1d4fe91797a5c119f", "text": "Online algorithms that process one example at a time are advantageous when dealing with very large data or with data streams. Stochastic gradient descent (SGD) is such an algorithm and it is an attractive choice for online SVM training due to its simplicity and effectiveness. When equipped with kernel functions, similarly to other SVM learning algorithms, SGD is susceptible to “the curse of kernelization” that causes unbounded linear growth in model size and update time with data size. This may render SGD inapplicable to large data sets. We address this issue by presenting a class of Budgeted SGD (BSGD) algorithms for large-scale kernel SVM training which have constant space and time complexity per update. 
BSGD keeps the number of support vectors bounded during training through several budget maintenance strategies. We treat the budget maintenance as a source of the gradient error, and relate the gap between the BSGD and the optimal SVM solutions via the average model degradation due to budget maintenance. To minimize the gap, we study greedy budget maintenance methods based on removal, projection, and merging of support vectors. We propose budgeted versions of several popular online SVM algorithms that belong to the SGD family. We further derive BSGD algorithms for multi-class SVM training. Comprehensive empirical results show that BSGD achieves much higher accuracy than the state-of-the-art budgeted online algorithms and comparable to non-budget algorithms, while achieving impressive computational efficiency both in time and space during training and prediction.", "title": "" }, { "docid": "e5d6e50ebec7f16f60815267cbb834ae", "text": "Internet of Things is a futuristic vision of networked objects able to take smart decisions and to assist humans in their daily activities. The founding factor of such a vision is the capacity of objects to exchange data by means of wireless networks, and the applied communication pattern is event-driven and multicast. Publish/subscribe services are a promising communication technology for building solutions according to the Internet of Things' vision. The available solutions consist in the adaptation of widely-known products for publish/subscribe services taken from other contexts and application domains, without considering the key peculiarities of the Internet of Things. Our driving idea is to consider beaconing as the basic communication means, and to build a publish/subscribe service based on it. Our solution can be used stand-alone for event-driven communications and/or integrated within standardized protocols to provide basic communication capabilities. We present a preliminary set of experiments showing the efficiency of our solution.", "title": "" }, { "docid": "cc57f21666ece3c6ba7c9a28228a44c1", "text": "The past few years have seen rapid advances in communication and information technology (C&IT), and the pervasion of the worldwide web into everyday life has important implications for education. Most medical schools provide extensive computer networks for their students, and these are increasingly becoming a central component of the learning and teaching environment. Such advances bring new opportunities and challenges to medical education, and are having an impact on the way that we teach and on the way that students learn, and on the very design and delivery of the curriculum. The plethora of information available on the web is overwhelming, and both students and staff need to be taught how to manage it effectively. Medical schools must develop clear strategies to address the issues raised by these technologies. We describe how medical schools are rising to this challenge, look at some of the ways in which communication and information technology can be used to enhance the learning and teaching environment, and discuss the potential impact of future developments on medical education.", "title": "" }, { "docid": "e3b91b1133a09d7c57947e2cd85a17c7", "text": "Although mobile devices are gaining more and more capabilities (i.e. CPU power, memory, connectivity, ...), they still fall short to execute complex rich media and data analysis applications. 
Offloading to the cloud is not always a solution, because of the high WAN latencies, especially for applications with real-time constraints such as augmented reality. Therefore the cloud has to be moved closer to the mobile user in the form of cloudlets. Instead of moving a complete virtual machine from the cloud to the cloudlet, we propose a more fine grained cloudlet concept that manages applications on a component level. Cloudlets do not have to be fixed infrastructure close to the wireless access point, but can be formed in a dynamic way with any device in the LAN network with available resources. We present a cloudlet architecture together with a prototype implementation, showing the advantages and capabilities for a mobile real-time augmented reality application.", "title": "" }, { "docid": "7af1ddcefae86ffa989ddd106f032002", "text": "In this paper, we study counterfactual fairness in text classification, which asks the question: How would the prediction change if the sensitive attribute referenced in the example were different? Toxicity classifiers demonstrate a counterfactual fairness issue by predicting that “Some people are gay” is toxic while “Some people are straight” is nontoxic. We offer a metric, counterfactual token fairness (CTF), for measuring this particular form of fairness in text classifiers, and describe its relationship with group fairness. Further, we offer three approaches, blindness, counterfactual augmentation, and counterfactual logit pairing (CLP), for optimizing counterfactual token fairness during training, bridging the robustness and fairness literature. Empirically, we find that blindness and CLP address counterfactual token fairness. The methods do not harm classifier performance, and have varying tradeoffs with group fairness. These approaches, both for measurement and optimization, provide a new path forward for addressing fairness concerns in text classification.", "title": "" }, { "docid": "61ae61d0950610ee2ad5e07f64f9b983", "text": "We present Searn, an algorithm for integrating search and learning to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. Searn is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike current algorithms for structured learning that require decomposition of both the loss function and the feature functions over the predicted structure, Searn is able to learn prediction functions for any loss function and any class of features. Moreover, Searn comes with a strong, natural theoretical guarantee: good performance on the derived classification problems implies good performance on the structured prediction problem.", "title": "" }, { "docid": "b338f9e213b1837e217e1969edf0aedf", "text": "In many applications today user interaction is moving away from mouse and pens and is becoming pervasive and much more physical and tangible. New emerging interaction technologies allow developing and experimenting with new interaction methods on the long way to providing intuitive human computer interaction. In this paper, we aim at recognizing gestures to interact with an application and present the design and evaluation of our sensor-based gesture recognition. As input device we employ the Wii-controller (Wiimote) which recently gained much attention world wide. We use the Wiimote's acceleration sensor independent of the gaming console for gesture recognition. 
The system allows the training of arbitrary gestures by users which can then be recalled for interacting with systems like photo browsing on a home TV. The developed library exploits Wii-sensor data and employs a hidden Markov model for training and recognizing user-chosen gestures. Our evaluation shows that we can already recognize gestures with a small number of training samples. In addition to the gesture recognition we also present our experiences with the Wii-controller and the implementation of the gesture recognition. The system forms the basis for our ongoing work on multimodal intuitive media browsing and are available to other researchers in the field.", "title": "" }, { "docid": "9d175a211ec3b0ee7db667d39c240e1c", "text": "In recent years, there has been an increased effort to introduce coding and computational thinking in early childhood education. In accordance with the international trend, programming has become an increasingly growing focus in European education. With over 9.5 million iOS downloads, ScratchJr is the most popular freely available introductory programming language for young children (ages 5-7). This paper provides an overview of ScratchJr, and the powerful ideas from computer science it is designed to teach. In addition, data analytics are presented to show trends of usage in Europe and and how it compares to the rest of the world. Data reveals that countries with robust computer science initiatives such as the UK and the Nordic countries have high usage of ScratchJr.", "title": "" } ]
scidocsrr
7580b3908370fb7e656615907a08440d
Antimicrobial Activity of Onion and Ginger against two Foodborne Pathogens, Escherichia coli and Staphylococcus aureus
[ { "docid": "64e444903af3492a2fe9941d2663d7a2", "text": "Spices and herbs have been used as food additives since ancient times, as flavouring agents but also as natural food preservatives. A number of spices shows antimicrobial activity against different types of microorganisms. This article gives a literature review of recent investigations considering antimicrobial activity of essential oils widely used spices and herbs, such as garlic, mustard, cinnamon, cumin, clove, bay, thyme, basil, oregano, pepper, ginger, sage, rosemary etc., against most common bacteria and fungi that contaminate food (Listeria spp., Staphylococcus spp., Salmonella spp., Escherichia spp., Pseudomonas spp., Aspergillus spp., Cladosporium spp. and many others). Antimicrobial activity depends on the type of spice or herb, type of food and microorganism, as well as on the chemical composition and content of extracts and essential oils. Summarizing results of different investigations, relative antimicrobial effectiveness can be made, and it shows that cinnamon, cloves and mustrad have very strong antimicrobial potential, cumin, oregano, sage, thyme and rosemary show medium inhibitory effect, and spices such as pepper and ginger have weak inhibitory effect.", "title": "" } ]
[ { "docid": "b5009853d22801517431f46683b235c2", "text": "Artificial intelligence (AI) is the study of how to make computers do things which, at the moment, people do better. Thus Strong AI claims that in near future we will be surrounded by such kinds of machine which can completely works like human being and machine could have human level intelligence. One intention of this article is to excite a broader AI audience about abstract algorithmic information theory concepts, and conversely to inform theorists about exciting applications to AI.The science of Artificial Intelligence (AI) might be defined as the construction of intelligent systems and their analysis.", "title": "" }, { "docid": "b4ee855cc868dba8a3413e10dfb5ab46", "text": "Cluster analysis is an unsupervised learning method that constitutes a cornerstone of an intelligent data analysis process. Clustering categorical data is an important research area data mining. In this paper we propose a novel algorithm to cluster categorical data. Based on the minimum dissimilarity value objects are grouped into cluster. In the merging process, the objects are relocated using silhouette coefficient. Experimental results show that the proposed method is efficient.", "title": "" }, { "docid": "f2e62e761c357c8490f1b53f125f8f28", "text": "The credit crisis and the ongoing European sovereign debt crisis have highlighted the native form of credit risk, namely the counterparty risk. The related Credit Valuation Adjustment (CVA), Debt Valuation Adjustment (DVA), Liquidity Valuation Adjustment (LVA) and Replacement Cost (RC) issues, jointly referred to in this paper as Total Valuation Adjustment (TVA), have been thoroughly investigated in the theoretical papers Crépey (2012a, 2012b). The present work provides an executive summary and numerical companion to these papers, through which the TVA pricing problem can be reduced to Markovian pre-default TVA BSDEs. The first step consists in the counterparty clean valuation of a portfolio of contracts, which is the valuation in a hypothetical situation where the two parties would be risk-free and funded at a risk-free rate. In the second step, the TVA is obtained as the value of an option on the counterparty clean value process called Contingent Credit Default Swap (CCDS). Numerical results are presented for interest rate swaps in the Vasicek, as well as in the inverse Gaussian Hull-White short rate model, also allowing one to assess the related model risk issue.", "title": "" }, { "docid": "8d6171dbe50a25873bd435ad25e48ae9", "text": "An automatic landing system is required on a long-range drone because the position of the vehicle cannot be reached visually by the pilot. The autopilot system must be able to correct the drone movement dynamically in accordance with its flying altitude. The current article describes autopilot system on an H-Octocopter drone using image processing and complementary filter. This paper proposes a new approach to reduce oscillations during the landing phase on a big drone. The drone flies above 10 meters to a provided coordinate using GPS data, to check for the existence of the landing area. This process is done visually using the camera. PID controller is used to correct the movement by calculate error distance detected by camera. The controller also includes altitude parameters on its calculations through a complementary filter. The controller output is the PWM signals which control the movement and altitude of the vehicle. 
The signal is then transferred to the Flight Controller through serial communication so that the drone is able to correct its movement. From the experiments, the accuracy is around 0.56 meters and the landing can be completed in 18 seconds.", "title": "" }, { "docid": "851fd19525da9dc5a46e3146948109df", "text": "As computation becomes increasingly limited by data movement and energy consumption, exploiting locality throughout the memory hierarchy becomes critical for maintaining the performance scaling that many have come to expect from the computing industry. Moving computation closer to main memory presents an opportunity to reduce the overheads associated with data movement. We explore the potential of using 3D die stacking to move memory-intensive computations closer to memory. This approach to processing-in-memory addresses some drawbacks of prior research on in-memory computing and appears commercially viable in the foreseeable future. We show promising early results from this approach and identify areas that are in need of research to unlock its full potential.", "title": "" }, { "docid": "cbdace4636017f925b89ecf266fde019", "text": "It is traditionally known that wideband apertures lose bandwidth when placed over a ground plane. To overcome this issue, this paper introduces a new non-symmetric tightly coupled dipole element for wideband phased arrays. The proposed array antenna incorporates additional degrees of freedom to control capacitance and cancel the ground plane inductance. Specifically, each arm on the dipole is different than the other (or non-symmetric). The arms are identical near the center feed section but dissimilar towards the ends, forming a ball-and-cup. It is demonstrated that the non-symmetric qualities achieve wideband performance. Concurrently, a design example for planar installation with balun and matching network is presented to cover X-band. The balun avoids extraneous radiation, maintains the array's low-profile height and is printed on top of the ground plane connecting to the array aperture with 180° out of phase vertical twin-wire transmission lines. To demonstrate the concept, a 64-element array with integrated feed and matching network is designed, fabricated and verified experimentally. The array aperture is placed λ/7 (at 8 GHz) above the ground plane and shown to maintain an active VSWR less than 2 from 8-12.5 GHz while scanning up to 70° and 60° in E- and H-plane, respectively. The array's simulated diagonal plane cross-polarization is approximately 10 dB below the co-polarized component during 60° diagonal scan and follows the theoretical limit for an infinite current sheet.", "title": "" }, { "docid": "a3afea380667f2f088f37ae9127fb05a", "text": "This paper presents a new distributed approach to detecting DDoS (distributed denial of service) flooding attacks at the traffic-flow level. The new defense system is suitable for efficient implementation over the core networks operated by Internet service providers (ISPs). At the early stage of a DDoS attack, some traffic fluctuations are detectable at Internet routers or at the gateways of edge networks. We develop a distributed change-point detection (DCD) architecture using change aggregation trees (CAT). The idea is to detect abrupt traffic changes across multiple network domains at the earliest time. Early detection of DDoS attacks minimizes the flooding damages to the victim systems serviced by the provider. The system is built over attack-transit routers, which work together cooperatively. 
Each ISP domain has a CAT server to aggregate the flooding alerts reported by the routers. CAT domain servers collaborate among themselves to make the final decision. To resolve policy conflicts at different ISP domains, a new secure infrastructure protocol (SIP) is developed to establish mutual trust or consensus. We simulated the DCD system up to 16 network domains on the Cyber Defense Technology Experimental Research (DETER) testbed, a 220-node PC cluster for Internet emulation experiments at the University of Southern California (USC) Information Science Institute. Experimental results show that four network domains are sufficient to yield a 98 percent detection accuracy with only 1 percent false-positive alarms. Based on a 2006 Internet report on autonomous system (AS) domain distribution, we prove that this DDoS defense system can scale well to cover 84 AS domains. This security coverage is wide enough to safeguard most ISP core networks from real-life DDoS flooding attacks.", "title": "" }, { "docid": "72aaa1dc7bdffa25c884ebbe4acf671d", "text": "BACKGROUND\nAnkylosing spondylitis (AS) can cause severe functional disorders that lead to loss of balance.\n\n\nOBJECTIVE\nThe aim of this study was to investigate the effects of balance and postural stability exercises in a spa-based rehabilitation programme in AS subjects.\n\n\nMETHODS\nTwenty-one participants were randomized to the study (n= 11) and control groups (n= 10). Patients' balance and stability were assessed with the Berg Balance Scale (BBS), Timed Up and Go (TUG) Test, Single Leg Stance Test (SLST) and Functional Reach Test (FRT). AS-specific measures were used to assess the other parameters. The treatment plan for both groups consisted of conventional transcutaneous electrical nerve stimulation (TENS), spa and land-based exercises 5 days per week for 3 weeks. The study group performed exercises based on postural stability and balance with routine physiotherapy practice in thermal water and in the exercise room.\n\n\nRESULTS\nThe TUG, SLST and FRT scores were significantly increased in the study group. In both groups, the BASMI, BASFI, BASDAI and ASQoL scores decreased significantly by the end of the treatment period (p< 0.05).\n\n\nCONCLUSIONS\nIn AS rehabilitation, performing balance and stability exercises in addition to spa-based routine approaches can increase the duration of maintaining balance and can improve the benefits of physiotherapy.", "title": "" }, { "docid": "c890c635dd0f2dcb6827f59707b5dcd4", "text": "We present two families of reflective surfaces that are capable of providing a wide field of view, and yet still approximate a perspective projection to a high degree. These surfaces are derived by considering a plane perpendicular to the axis of a surface of revolution and finding the equations governing the distortion of the image of the plane in this surface. We then view this relation as a differential equation and prescribe the distortion term to be linear. By choosing appropriate initial conditions for the differential equation and solving it numerically, we derive the surface shape and obtain a precise estimate as to what degree the resulting sensor can approximate a perspective projection. Thus these surfaces act as computational sensors, allowing for a wide-angle perspective view of a scene without processing the image in software. The applications of such a sensor should be numerous, including surveillance, robotics and traditional photography. 
Recently, many researchers in the robotics and vision community have begun to consider visual sensors that are able to obtain wide fields of view. Such devices are the natural solution to various difficulties encountered with conventional imaging systems. The two most common means of obtaining wide fields of view are fish-eye lenses and reflective surfaces, also known as catoptrics. When catoptrics are combined with conventional lens systems, known as dioptrics, the resulting sensors are known as catadioptrics. The possible uses of these systems include applications such as robot control and surveillance. In this paper we will consider only catadioptric based sensors. Often such systems consist of a camera pointing at a convex mirror, as in figure (1). How to interpret and make use of the visual information obtained by such systems, e.g. how they should be used to control robots, is not at all obvious. There are infinitely many different shapes that a mirror can have, and at least two different camera models (perspective and orthographic projection) with which to combine each mirror. Convex mirror", "title": "" }, { "docid": "b0b4b48185bf55b6f5542faf7d80563b", "text": "Natural languages have words for all the operators of first-order logic, modal logic, and many logics that have yet to be invented. They also have words and phrases for everything that anyone has ever discovered, assumed, or imagined. Aristotle invented formal logic as a tool (organon) for analyzing and reasoning about the ontologies implicit in language. Yet some linguists and logicians took a major leap beyond Aristotle: they claimed that there exists a special kind of logic at the foundation of all NLs, and the discovery of that logic would be the key to harnessing their power and implementing them in computer systems. Projects in artificial intelligence developed large systems based on complex versions of logic, yet those systems are fragile and limited in comparison to the robust and immensely expressive natural languages. Formal logics are too inflexible to be the foundation for language; instead, logic and ontology are abstractions from language. This reversal turns many theories about language upside down, and it has profound implications for the design of automated systems for reasoning and language understanding. This article analyzes these issues in terms of Peirce’s semiotics and Wittgenstein’s language games. The resulting analysis leads to a more dynamic, flexible, and extensible basis for ontology and its use in formal and informal reasoning. This article is a slightly revised preprint of Chapter 11 in Theory and Applications of Ontology: Philosophical Perspectives, edited by R. Poli & J. Seibt, Berlin: Springer, pp. 231-263. 1. The Search for Foundations Natural languages are the most sophisticated systems of communication ever developed. Formal ontologies are valuable for applications in science, engineering, and business, but they have been difficult to generalize beyond narrowly defined microtheories for specialized domains. For language understanding, formal systems have only been successful in narrow domains, such as weather reports and airline reservations. Frege, Russell, and the Vienna Circle tried to make formal logic the universal language of science, but that attempt failed. Only the final results of any research can be stated formally, never the vague hunches, intuitive explorations, and heated debates that are necessary for any creative advance. 
Scientists and engineers criticized formal methods with the pithy slogans “Physicists don’t do axioms” and “All models are wrong, but some are useful.” Even Aristotle, who invented the first formal logic, admitted that his syllogisms and categories are important for stating the results of research, but that informal methods are necessary for gathering and interpreting empirical evidence. Aristotle’s logic and categories still serve as a paradigm for the ontologies used in modern computer systems, but his grand synthesis began to break down in the 16th century. Aristotle’s physics and cosmology were demolished by the work of Copernicus, Galileo, Kepler, and Newton. In philosophy, the skeptical tradition of antiquity was revived by the publication in 1562 of a new edition of the works of Sextus Empiricus, whose attacks on Aristotle were popularized by the essays of Michel de Montaigne. In responding to the skeptics, Descartes began his search for certainty from the standpoint of universal doubt, but he merely reinforced the corrosive effects of skepticism. The British empiricists responded with new approaches to epistemology, which culminated in Hume’s devastating criticisms of the foundations of science itself. Two responses to Hume helped to restore the legitimacy of science: Thomas Reid’s critical common sense and Immanuel Kant’s three major Critiques. Kant (1787) adopted Aristotle’s logic as the basis for his new system of categories, which he claimed would be sufficient for defining all other concepts: If one has the original and primitive concepts, it is easy to add the derivative and subsidiary, and thus give a complete picture of the family tree of the pure understanding. Since at present, I am concerned not with the completeness of the system, but only with the principles to be followed, I leave this supplementary work for another occasion. It can easily be carried out with the aid of the ontological manuals. (A:82, B:108) Two centuries later, Kant’s “easy” task is still unfinished. His Opus postumum records the struggles of the last decade of his life when Kant tried to make a transition from his a priori metaphysics to the experimental evidence of physics. Förster (2000) wrote “although Kant began this manuscript in order to solve a comparatively minor problem within his philosophy, his reflections soon forced him to readdress virtually all the key problems of his critical philosophy: the objective validity of the categories, the dynamical theory of matter, the nature of space and time, the refutation of idealism, the theory of the self and its agency, the question of living organisms, the doctrine of practical postulates and the idea of God, the unity of theoretical and practical reason, and, finally, the idea of transcendental philosophy itself.” Unlike Aristotle, who used logic as a tool for analyzing language, Kant assumed that logic is a prerequisite, not only for language, but for all rational thought. Richard Montague (1970) pushed Kant’s assumption to an extreme: “I reject the contention that an important theoretical difference exists between formal and natural languages.” That assumption, acknowledged or not, motivated much of the research in artificial intelligence and formal linguistics. The resulting systems are theoretically impressive, but they cannot learn and use ordinary language with the ease and flexibility of a threeyear-old child. But if logic is inadequate, what other foundation could support linguistics and AI? 
What kind of semantics could represent the highly technical formalisms of science, the colloquial speech of everyday life, and the requirements for sharing and reasoning with the knowledge scattered among millions of computers across the Internet? How would the research and development change under the assumption that logic is a derivative from language, not a prerequisite for it? One major change would be a shift in emphasis from the rigid views of Frege, Russell, and Carnap to the more flexible philosophies of Peirce, Whitehead, and the later Wittgenstein. As logicians, those two groups were equally competent, but the former considered logic to be superior to language, while the latter recognized the limitations of logic and the power of language. Peirce, in particular, was a pioneer in logic, but he included logic within the broader field of semiotics, or as he spelled it, semeiotic. That broader view relates the precise formalisms of grammar and logic to the more primitive, yet more flexible mechanisms of perception, action, and learning. Language is based on vocal signs and patterns of signs, whose more stable forms are classified as vocabulary and grammar. But instead of starting with formal precision, the first signs are vague, ambiguous, and uncertain. Precision is a rare state that never occurs in the early stages of learning, and absolute precision is unattainable in any semiotic system that represents the real world. Grammar, logic, and ontology describe stable patterns of signs or invariants under transformations of perspective. Those stable patterns, which are fossilized in formal theories, develop as each individual interacts with the world and other creatures in it. Although formal logic can be studied independently of natural language semantics, no formal ontology that has any practical application can ever be developed and used without acknowledging its intimate connection with NL semantics. An ontology for medical informatics, for example, must be related to medical publications, to a physician’s diagnoses, and to the discussions among general practitioners, specialists, nurses, patients, and the programmers who develop the software they use. All these people are constantly thinking and using NL semantics, not the formal axioms of some theory. Frege (1879) hoped “to break the domination of the word over the human spirit by laying bare the misconceptions that through the use of language often almost unavoidably arise concerning the relations between concepts.” Wittgenstein agreed that language can be misleading, but he denied that an artificial language could be better. At best, it would be a different language game (Sprachspiel). These philosophical observations explain why large knowledge bases such as Cyc (Lenat 1995) have failed to achieve true artificial intelligence. An inference engine attached to a large collection of formally defined facts and axioms can prove theorems more efficiently than most people, but it lacks the flexibility of a child in learning new information and adapting old information to new situations (Sowa 2005). Two computer scientists who devoted their careers to different aspects of AI have concluded that the goal of a fixed formal ontology of everything is both unattainable and misguided. Alan Bundy, who developed formal methods for theorem proving and problem solving, proposed ontology evolution as a method for systematically relating smaller domain ontologies and adapting them to specific problems (Bundy & McNeill 2006; Bundy 2007). 
Yorick Wilks, who developed informal methods of preference semantics for natural language processing, maintained that the lexical resources used in language analysis and interpretation are sharply distinct from and should not be confused with formal ontologies (Wilks 2006, 2008a,b). These two views can be reconciled by using linguistic information as the basis for indexing and relating an open-ended variety of task-oriented ontologies. Instead of a static ontology, this article develops a dynamic approach that can relate the often vague and shifting meanings of ordinary words to the formal ontologies needed for computer appli", "title": "" }, { "docid": "b00c0aac81e8c0fe804268178afb98ed", "text": "Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R2 measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.", "title": "" }, { "docid": "9c533c7059640ef502a75df36d310a91", "text": "Reference phylogenies are crucial for providing a taxonomic framework for interpretation of marker gene and metagenomic surveys, which continue to reveal novel species at a remarkable rate. Greengenes is a dedicated full-length 16S rRNA gene database that provides users with a curated taxonomy based on de novo tree inference. We developed a ‘taxonomy to tree’ approach for transferring group names from an existing taxonomy to a tree topology, and used it to apply the Greengenes, National Center for Biotechnology Information (NCBI) and cyanoDB (Cyanobacteria only) taxonomies to a de novo tree comprising 408 315 sequences. We also incorporated explicit rank information provided by the NCBI taxonomy to group names (by prefixing rank designations) for better user orientation and classification consistency. 
The resulting merged taxonomy improved the classification of 75% of the sequences by one or more ranks relative to the original NCBI taxonomy with the most pronounced improvements occurring in under-classified environmental sequences. We also assessed candidate phyla (divisions) currently defined by NCBI and present recommendations for consolidation of 34 redundantly named groups. All intermediate results from the pipeline, which includes tree inference, jackknifing and transfer of a donor taxonomy to a recipient tree (tax2tree) are available for download. The improved Greengenes taxonomy should provide important infrastructure for a wide range of megasequencing projects studying ecosystems on scales ranging from our own bodies (the Human Microbiome Project) to the entire planet (the Earth Microbiome Project). The implementation of the software can be obtained from http://sourceforge.net/projects/tax2tree/.", "title": "" }, { "docid": "8d7af01e003961cbf2a473abe32d8b7e", "text": "This paper presents a series of control strategies for soft compliant manipulators. We provide a novel approach to control multi-fingered tendon-driven foam hands using a CyberGlove and a simple ridge regression model. The results achieved include complex posing, dexterous grasping and in-hand manipulations. To enable efficient data sampling and a more intuitive design process of foam robots, we implement and evaluate a finite element based simulation. The accuracy of this model is evaluated using a Vicon motion capture system. We then use this simulation to solve inverse kinematics and compare the performance of supervised learning, reinforcement learning, nearest neighbor and linear ridge regression methods in terms of their accuracy and sample efficiency.", "title": "" }, { "docid": "45d6563b2b4c64bb11ad65c3cff0d843", "text": "The performance of single cue object tracking algorithms may degrade due to complex nature of visual world and environment challenges. In recent past, multicue object tracking methods using single or multiple sensors such as vision, thermal, infrared, laser, radar, audio, and RFID are explored to a great extent. It was acknowledged that combining multiple orthogonal cues enhance tracking performance over single cue methods. The aim of this paper is to categorize multicue tracking methods into single-modal and multi-modal and to list out new trends in this field via investigation of representative work. The categorized works are also tabulated in order to give detailed overview of latest advancement. The person tracking datasets are analyzed and their statistical parameters are tabulated. The tracking performance measures are also categorized depending upon availability of ground truth data. Our review gauges the gap between reported work and future demands for object tracking.", "title": "" }, { "docid": "a0589d0c1df89328685bdabd94a1a8a2", "text": "We present a translation of §§160–166 of Dedekind’s Supplement XI to Dirichlet’s Vorlesungen über Zahlentheorie, which contain an investigation of the subfields of C. In particular, Dedekind explores the lattice structure of these subfields, by studying isomorphisms between them. He also indicates how his ideas apply to Galois theory. After a brief introduction, we summarize the translated excerpt, emphasizing its Galois-theoretic highlights. 
We then take issue with Kiernan’s characterization of Dedekind’s work in his extensive survey article on the history of Galois theory; Dedekind has a nearly complete realization of the modern “fundamental theorem of Galois theory” (for subfields of C), in stark contrast to the picture presented by Kiernan at points. We intend a sequel to this article of an historical and philosophical nature. With that in mind, we have sought to make Dedekind’s text accessible to as wide an audience as possible. Thus we include a fair amount of background and exposition.", "title": "" }, { "docid": "7c8d5da89424dfba8fc84c7cb4f36856", "text": "Advances in sensor data collection technology, such as pervasive and embedded devices, and RFID Technology have lead to a large number of smart devices which are connected to the net and continuously transmit their data over time. It has been estimated that the number of internet connected devices has overtaken the number of humans on the planet, since 2008. The collection and processing of such data leads to unprecedented challenges in mining and processing such data. Such data needs to be processed in real-time and the processing may be highly distributed in nature. Even in cases, where the data is stored offline, the size of the data is often so large and distributed, that it requires the use of big data analytical tools for processing. In addition, such data is often sensitive, and brings a number of privacy challenges associated 384 MANAGING AND MINING SENSOR DATA with it. This chapter will discuss a data analytics perspective about mining and managing data associated with this phenomenon, which is now known as the internet of things.", "title": "" }, { "docid": "0293a868dcbe113145459f5708c0526c", "text": "Digital forensics has become a critical part of almost every investigation, and users of digital forensics tools are becoming more diverse in their backgrounds and interests. As a result, usability is an important aspect of these tools. This paper examines the usability aspect of forensics tools through interviews and surveys designed to obtain feedback from professionals using these tools as part of their regularly assigned duties. The study results highlight a number of usability issues that need to be taken into consideration when designing and implementing digital forensics tools.", "title": "" }, { "docid": "45022fe83fcd90ba6b63f3382791df7b", "text": "In this paper, we present a generative sketch model for human hair analysis and synthesis. We treat hair images as 2D piecewise smooth vector (flow) fields and, thus, our representation is view-based in contrast to the physically-based 3D hair models in graphics. The generative model has three levels. The bottom level is the high-frequency band of the hair image. The middle level is a piecewise smooth vector field for the hair orientation, gradient strength, and growth directions. The top level is an attribute sketch graph for representing the discontinuities in the vector field. A sketch graph typically has a number of sketch curves which are divided into 11 types of directed primitives. Each primitive is a small window (say 5 times 7 pixels) where the orientations and growth directions are defined in parametric forms, for example, hair boundaries, occluding lines between hair strands, dividing lines on top of the hair, etc. 
In addition to the three level representation, we model the shading effects, i.e., the low-frequency band of the hair image, by a linear superposition of some Gaussian image bases and we encode the hair color by a color map. The inference algorithm is divided into two stages: 1) We compute the undirected orientation field and sketch graph from an input image and 2) we compute the hair growth direction for the sketch curves and the orientation field using a Swendsen-Wang cut algorithm. Both steps maximize a joint Bayesian posterior probability. The generative model provides a straightforward way for synthesizing realistic hair images and stylistic drawings (rendering) from a sketch graph and a few Gaussian bases. The latter can be either inferred from a real hair image or input (edited) manually using a simple sketching interface. We test our algorithm on a large data set of hair images with diverse hair styles. Analysis, synthesis, and rendering results are reported in the experiments", "title": "" }, { "docid": "4a65fcbc395eab512d8a7afe33c0f5ae", "text": "In eukaryotes, the spindle-assembly checkpoint (SAC) is a ubiquitous safety device that ensures the fidelity of chromosome segregation in mitosis. The SAC prevents chromosome mis-segregation and aneuploidy, and its dysfunction is implicated in tumorigenesis. Recent molecular analyses have begun to shed light on the complex interaction of the checkpoint proteins with kinetochores — structures that mediate the binding of spindle microtubules to chromosomes in mitosis. These studies are finally starting to reveal the mechanisms of checkpoint activation and silencing during mitotic progression.", "title": "" }, { "docid": "786d1ba82d326370684395eba5ef7cd3", "text": "A miniaturized dual-band Wilkinson power divider with a parallel LC circuit at the midpoints of two coupled-line sections is proposed in this paper. General design equations for parallel inductor L and capacitor C are derived from even- and odd-mode analysis. Generally speaking, characteristic impedances between even and odd modes are different in two coupled-line sections, and their electrical lengths are also different in inhomogeneous medium. This paper proved that a parallel LC circuit compensates for the characteristic impedance differences and the electrical length differences for dual-band operation. In other words, the proposed model provides self-compensation structure, and no extra compensation circuits are needed. Moreover, the upper limit of the frequency ratio range can be adjusted by two coupling strengths, where loose coupling for the first coupled-line section and tight coupling for the second coupled-line section are preferred for a wider frequency ratio range. Finally, an experimental circuit shows good agreement with the theoretical simulation.", "title": "" } ]
scidocsrr
fbf349ad6aaf78845e181efd3d22bc0f
Super Normal Vector for Human Activity Recognition with Depth Cameras
[ { "docid": "e10dbbc6b3381f535ff84a954fcc7c94", "text": "Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of Shotton et al. [16] have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skeletal representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skeletal representation lies in the Lie group SE(3)×.. .×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.", "title": "" }, { "docid": "2a56702663e6e52a40052a5f9b79a243", "text": "Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures.", "title": "" }, { "docid": "2b6c016395d92ef20c4e316a35a7ecb8", "text": "Recently, the low-cost Microsoft Kinect sensor, which can capture real-time high-resolution RGB and depth visual information, has attracted increasing attentions for a wide range of applications in computer vision. Existing techniques extract hand-tuned features from the RGB and the depth data separately and heuristically fuse them, which would not fully exploit the complementarity of both data sources. In this paper, we introduce an adaptive learning methodology to automatically extract (holistic) spatio-temporal features, simultaneously fusing the RGB and depth information, from RGBD video data for visual recognition tasks. 
We address this as an optimization problem using our proposed restricted graph-based genetic programming (RGGP) approach, in which a group of primitive 3D operators are first randomly assembled as graph-based combinations and then evolved generation by generation by evaluating on a set of RGBD video samples. Finally the best-performed combination is selected as the (near-)optimal representation for a pre-defined task. The proposed method is systematically evaluated on a new hand gesture dataset, SKIG, that we collected ourselves and the public MSRDailyActivity3D dataset, respectively. Extensive experimental results show that our approach leads to significant advantages compared with state-of-the-art handcrafted and machine-learned features.", "title": "" } ]
[ { "docid": "79d6aa27e761b25348481ffed15a8bd9", "text": "Correlation filter (CF) based trackers have recently gained a lot of popularity due to their impressive performance on benchmark datasets, while maintaining high frame rates. A significant amount of recent research focuses on the incorporation of stronger features for a richer representation of the tracking target. However, this only helps to discriminate the target from background within a small neighborhood. In this paper, we present a framework that allows the explicit incorporation of global context within CF trackers. We reformulate the original optimization problem and provide a closed form solution for single and multi-dimensional features in the primal and dual domain. Extensive experiments demonstrate that this framework significantly improves the performance of many CF trackers with only a modest impact on frame rate.", "title": "" }, { "docid": "2e2ee64b0e2d18fff783d67fade3f9b3", "text": "This paper discusses some aspects of selecting and testing random and pseudorandom number generators. The outputs of such generators may be used in many cryptographic apphcations, such as the generation of key material. Generators suitable for use in cryptographic applications may need to meet stronger requirements than for other applications. In particular, their outputs must be unpredictable in the absence of knowledge of the inputs. Some criteria for characterizing and selecting appropriate generators are discussed in this document. The subject of statistical testing and its relation to cryptanalysis is also discussed, and some recommended statistical tests are provided. These tests may be useful as a first step in determining whether or not a generator is suitable for a particular cryptographic application. However, no set of statistical tests can absolutely certify a generator as appropriate for usage in a particular application, i.e., statistical testing cannot serve as a substitute for cryptanalysis. The design and cryptanalysis of generators is outside the scope of this paper.", "title": "" }, { "docid": "9361c6eaa2faaa3cfebc4a073ee8f3d3", "text": "In this paper we present the analysis of two large-scale network file system workloads. We measured CIFS traffic for two enterprise-class file servers deployed in the NetApp data center for a three month period. One file server was used by marketing, sales, and finance departments and the other by the engineering department. Together these systems represent over 22 TB of storage used by over 1500 employees, making this the first ever large-scale study of the CIFS protocol. We analyzed how our network file system workloads compared to those of previous file system trace studies and took an in-depth look at access, usage, and sharing patterns. We found that our workloads were quite different from those previously studied; for example, our analysis found increased read-write file access patterns, decreased read-write ratios, more random file access, and longer file lifetimes. In addition, we found a number of interesting properties regarding file sharing, file re-use, and the access patterns of file types and users, showing that modern file system workload has changed in the past 5–10 years. 
This change in workload characteristics has implications on the future design of network file systems, which we describe in the paper.", "title": "" }, { "docid": "fc04f9bd523e3d2ca57ab3a8e730397b", "text": "Interactive, distributed, and embedded systems often behave stochastically, for example, when inputs, message delays, or failures conform to a probability distribution. However, reasoning analytically about the behavior of complex stochastic systems is generally infeasible. While simulations of systems are commonly used in engineering practice, they have not traditionally been used to reason about formal specifications. Statistical model checking (SMC) addresses this weakness by using a simulation-based approach to reason about precise properties specified in a stochastic temporal logic. A specification for a communication system may state that within some time bound, the probability that the number of messages in a queue will be greater than 5 must be less than 0.01. Using SMC, executions of a stochastic system are first sampled, after which statistical techniques are applied to determine whether such a property holds. While the output of sample-based methods are not always correct, statistical inference can quantify the confidence in the result produced. In effect, SMC provides a more widely applicable and scalable alternative to analysis of properties of stochastic systems using numerical and symbolic methods. SMC techniques have been successfully applied to analyze systems with large state spaces in areas such as computer networking, security, and systems biology. In this article, we survey SMC algorithms, techniques, and tools, while emphasizing current limitations and tradeoffs between precision and scalability.", "title": "" }, { "docid": "e0fb10bf5f0206c8cf3f97f5daa33fc0", "text": "Existing techniques on adversarial malware generation employ feature mutations based on feature vectors extracted from malware. However, most (if not all) of these techniques suffer from a common limitation: feasibility of these attacks is unknown. The synthesized mutations may break the inherent constraints posed by code structures of the malware, causing either crashes or malfunctioning of malicious payloads. To address the limitation, we present Malware Recomposition Variation (MRV), an approach that conducts semantic analysis of existing malware to systematically construct new malware variants for malware detectors to test and strengthen their detection signatures/models. In particular, we use two variation strategies (i.e., malware evolution attack and malware confusion attack) following structures of existing malware to enhance feasibility of the attacks. Upon the given malware, we conduct semantic-feature mutation analysis and phylogenetic analysis to synthesize mutation strategies. Based on these strategies, we perform program transplantation to automatically mutate malware bytecode to generate new malware variants. We evaluate our MRV approach on actual malware variants, and our empirical evaluation on 1,935 Android benign apps and 1,917 malware shows that MRV produces malware variants that can have high likelihood to evade detection while still retaining their malicious behaviors. We also propose and evaluate three defense mechanisms to counter MRV.", "title": "" }, { "docid": "b4dd6c9634e86845795bcbe32216ee44", "text": "Several program analysis tools - such as plagiarism detection and bug finding - rely on knowing a piece of code's relative semantic importance. 
For example, a plagiarism detector should not bother reporting two programs that have an identical simple loop counter test, but should report programs that share more distinctive code. Traditional program analysis techniques (e.g., finding data and control dependencies) are useful, but do not say how surprising or common a line of code is. Natural language processing researchers have encountered a similar problem and addressed it using an n-gram model of text frequency, derived from statistics computed over text corpora.\n We propose and compute an n-gram model for programming languages, computed over a corpus of 2.8 million JavaScript programs we downloaded from the Web. In contrast to previous techniques, we describe a code n-gram as a subgraph of the program dependence graph that contains all nodes and edges reachable in n steps from the statement. We can count n-grams in a program and count the frequency of n-grams in the corpus, enabling us to compute tf-idf-style measures that capture the differing importance of different lines of code. We demonstrate the power of this approach by implementing a plagiarism detector with accuracy that beats previous techniques, and a bug-finding tool that discovered over a dozen previously unknown bugs in a collection of real deployed programs.", "title": "" }, { "docid": "f0d3ab8a530d7634149a5c29fa8bfe1b", "text": "In this paper, a novel broadband dual-polarized (slant ±45°) base station antenna element operating at 790–960 MHz is proposed. The antenna element consists of two pairs of symmetrical dipoles, four couples of baluns, a cricoid pedestal and two kinds of plastic fasteners. Specific shape metal reflector is also designed to achieve stable radiation pattern and high front-to-back ratio (FBR). All the simulated and measured results show that the proposed antenna element has wide impedance bandwidth (about 19.4%), low voltage standing wave ratio (VSWR < 1.4) and high port to port isolation (S21 < −25 dB) at the whole operating frequency band. Stable horizontal half-power beam width (HPBW) with 65°±4.83° and high gain (> 9.66 dBi) are also achieved. The proposed antenna element fabricated by integrated metal casting technology has great mechanical properties such as compact structure, low profile, good stability, light weight and easy to fabricate. Due to its good electrical and mechanical characteristics, the antenna element is suitable for European Digital Dividend, CDMA800 and GSM900 bands in base station antenna of modern mobile communication.", "title": "" }, { "docid": "f7e4c0300f1483883956be3cb5ccc174", "text": "Despite of the fact that graph-based methods are gaining more and more popularity in different scientific areas, it has to be considered that the choice of an appropriate algorithm for a given application is still the most crucial task. The lack of a large database of graphs makes the task of comparing the performance of different graph matching algorithms difficult, and often the selection of an algorithm is made on the basis of a few experimental results available. In this paper we present an experimental comparative evaluation of the performance of four graph matching algorithms. In order to perform this comparison, we have built and made available a large database of graphs, which is also described in detail in this article. 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "b80dd3b00935f35fc214771ec8adbd98", "text": "We have reviewed the research of context awareness of recent years in this paper. 
First of all, we describe the definitions of context and context awareness given by different scholars in different fields, and propose our own definition. In addition, different classifications of context and context awareness are analyzed. We focus on context acquisition and sensing, context modeling and representation, context filtering and fusion, and context storage and retrieval in context awareness computing. Finally, domestic and foreign applications of context awareness are discussed, and the future development of context awareness is outlined.", "title": "" }, { "docid": "0f0305afce53933df1153af6a31c09fb", "text": "In the study of indoor simultaneous localization and mapping (SLAM) problems using a stereo camera, two types of primary features, point and line segments, have been widely used to calculate the pose of the camera. However, many feature-based SLAM systems are not robust when the camera moves sharply or turns too quickly. In this paper, an improved indoor visual SLAM method to better utilize the advantages of point and line segment features and achieve robust results in difficult environments is proposed. First, point and line segment features are automatically extracted and matched to build two kinds of projection models. Subsequently, for the optimization problem of line segment features, we add minimization of angle observation in addition to the traditional re-projection error of endpoints. Finally, our model of motion estimation, which is adaptive to the motion state of the camera, is applied to build a new combinational Hessian matrix and gradient vector for iterated pose estimation. Furthermore, our proposal has been tested on EuRoC MAV datasets and sequence images captured with our stereo camera. The experimental results demonstrate the effectiveness of our improved point-line feature based visual SLAM method in improving localization accuracy when the camera moves with rapid rotation or violent fluctuation.", "title": "" }, { "docid": "b9d8ea80169ac5a5c48fd631c9d5625a", "text": "Deep convolutional networks have achieved great success for image recognition. However, for action recognition in videos, their advantage over traditional methods is not so evident. We present a general and flexible video-level framework for learning action models in videos. This method, called temporal segment network (TSN), aims to model long-range temporal structures with a new segment-based sampling and aggregation module. This unique design enables our TSN to efficiently learn action models by using the whole action videos. The learned models could be easily adapted for action recognition in both trimmed and untrimmed videos with simple average pooling and multi-scale temporal window integration, respectively. We also study a series of good practices for the instantiation of the TSN framework given limited training samples. Our approach obtains the state-of-the-art performance on four challenging action recognition benchmarks: HMDB51 (71.0%), UCF101 (94.9%), THUMOS14 (80.1%), and ActivityNet v1.2 (89.6%). Using the proposed RGB difference for motion models, our method can still achieve competitive accuracy on UCF101 (91.0 %) while running at 340 FPS. 
Furthermore, based on the temporal segment networks, we won the video classification track at the ActivityNet challenge 2016 among 24 teams, which demonstrates the effectiveness of TSN and the proposed good practices.", "title": "" }, { "docid": "f8ce53fda72962da3595c2a06398e9c8", "text": "Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence. Different context words have different influences on determining the sentiment polarity of a sentence towards the target. Therefore, it is desirable to integrate the connections between target word and context words when building a learning system. In this paper, we develop two target dependent long short-term memory (LSTM) models, where target information is automatically taken into account. We evaluate our methods on a benchmark dataset from Twitter. Empirical results show that modeling sentence representation with standard LSTM does not perform well. Incorporating target information into LSTM can significantly boost the classification accuracy. The target-dependent LSTM models achieve state-of-the-art performances without using syntactic parser or external sentiment lexicons.1", "title": "" }, { "docid": "947f17970a81ebc4e8c780b1291aa474", "text": "Minimally invasive total hip arthroplasty (THA) is claimed to be superior to the standard technique, due to the potential reduction of soft tissue damage via a smaller and tissue-sparing approach. As a result of the lack of objective evidence of fewer muscle and tendon defects, controversy still remains as to whether minimally invasive total hip arthroplasty truly minimizes muscle and tendon damage. Therefore, the objective was to compare the influence of the surgical approach on abductor muscle trauma and to analyze the relevance to postoperative pain and functional recovery. Between June 2006 and July 2007, 44 patients with primary hip arthritis were prospectively included in the study protocol. Patients underwent cementless unilateral total hip arthroplasty either through a minimally invasive anterolateral approach (ALMI) (n = 21) or a modified direct lateral approach (mDL) (n = 16). Patients were evaluated clinically and underwent MR imaging preoperatively and at 3 and 12 months postoperatively. Clinical assessment contained clinical examination, performance of abduction test and the survey of a function score using the Harris Hip Score, a pain score using a numeric rating scale (NRS) of 0–10, as well as a satisfaction score using an NRS of 1–6. Additionally, myoglobin and creatine kinase were measured preoperatively, and 6, 24 and 96 h postoperatively. Evaluation of the MRI images included fatty atrophy (rating scale 0–4), tendon defects (present/absent) and bursal fluid collection of the abductor muscle. Muscle and tendon damage occurred in both groups, but more lateral gluteus medius tendon defects [mDL 3/12mth.: 6 (37%)/4 (25%); ALMI: 3 (14%)/2 (9%)] and muscle atrophy in the anterior part of the gluteus medius [mean-standard (12): 1.75 ± 1.8; mean-MIS (12): 0.98 ± 1.1] were found in patients with the mDL approach. The clinical outcome was also poorer compared to the ALMI group. Significantly, more Trendelenburg’s signs were evident and lower clinical scores were achieved in the mDL group. No differences in muscle and tendon damage were found for the gluteus minimus muscle. 
A higher serum myoglobin concentration was measured 6 and 24 h postoperatively in the mDL group (6 h: 403 ± 168 μg/l; 24 h: 304 ± 182 μg/l) compared to the ALMI group (6 h: 331 ± 143 μg/l; 24 h: 268 ± 145 μg/l). Abductor muscle and tendon damage occurred in both approaches, but the gluteus medius muscle can be spared more successfully via the minimally invasive approach and is accompanied by a better clinical outcome. Therefore, going through the intermuscular plane, without any detachment or dissection of muscle and tendons, truly minimizes perioperative soft tissue trauma. Furthermore, MRI emerges as an important imaging modality in the evaluation of muscle trauma in THA.", "title": "" }, { "docid": "1ad08b9ecc0a08f5e0847547c55ea90d", "text": "Text summarization is the process of creating a shorter version of one or more text documents. Automatic text summarization has become an important way of finding relevant information in large text libraries or in the Internet. Extractive text summarization techniques select entire sentences from documents according to some criteria to form a summary. Sentence scoring is the technique most used for extractive text summarization, today. Depending on the context, however, some techniques may yield better results than some others. This paper advocates the thesis that the quality of the summary obtained with combinations of sentence scoring methods depend on text subject. Such hypothesis is evaluated using three different contexts: news, blogs and articles. The results obtained show the validity of the hypothesis formulated and point at which techniques are more effective in each of those contexts studied.", "title": "" }, { "docid": "8bba758fac60ce1139b7a6809bbe3efd", "text": "BACKGROUND\nYoung women with polycystic ovary syndrome (PCOS) have a high risk of developing endometrial carcinoma. There is a need for the development of new medical therapies that can reduce the need for surgical intervention so as to preserve the fertility of these patients. The aim of the study was to describe and discuss cases of PCOS and insulin resistance (IR) women with early endometrial carcinoma while being co-treated with Diane-35 and metformin.\n\n\nMETHODS\nFive PCOS-IR women who were scheduled for diagnosis and therapy for early endometrial carcinoma were recruited. The hospital records and endometrial pathology reports were reviewed. All patients were co-treated with Diane-35 and metformin for 6 months to reverse the endometrial carcinoma and preserve their fertility. Before, during, and after treatment, endometrial biopsies and blood samples were obtained and oral glucose tolerance tests were performed. Endometrial pathology was evaluated. Body weight (BW), body mass index (BMI), follicle-stimulating hormone (FSH), luteinizing hormone (LH), total testosterone (TT), sex hormone-binding globulin (SHBG), free androgen index (FAI), insulin area under curve (IAUC), and homeostasis model assessment of insulin resistance (HOMA-IR) were determined.\n\n\nRESULTS\nClinical stage 1a, low grade endometrial carcinoma was confirmed before treatment. After 6 months of co-treatment, all patients showed normal epithelia. No evidence of atypical hyperplasia or endometrial carcinoma was found. Co-treatment resulted in significant decreases in BW, BMI, TT, FAI, IAUC, and HOMA-IR in parallel with a significant increase in SHBG. 
There were no differences in the FSH and LH levels after co-treatment.\n\n\nCONCLUSIONS\nCombined treatment with Diane-35 and metformin has the potential to revert the endometrial carcinoma into normal endometrial cells in PCOS-IR women. The cellular and molecular mechanisms behind this effect merit further investigation.", "title": "" }, { "docid": "85e42e9dd33ed5ece93aa73a6fe1b6e3", "text": "In the present work, a continuous CO2 laser welding process was successfully applied and optimized for joining dissimilar AISI 316 stainless steel and AISI 1009 low carbon steel plates. Laser power, welding speed, and defocusing distance combinations were carefully selected with the objective of producing a welded joint with complete penetration, minimum fusion zone size and an acceptable welding profile. The fusion zone area and shape of dissimilar austenitic stainless steel with ferritic low carbon steel were evaluated as a function of the selected laser welding parameters. The Taguchi approach was used as the statistical design of experiment (DOE) technique for optimizing the selected welding parameters in terms of minimizing the fusion zone. Mathematical models were developed to describe the influence of the selected parameters on the fusion zone area and shape, and to predict its value within the limits of the variables being studied. The results indicate that the developed models can predict the responses satisfactorily.", "title": "" }, { "docid": "d15ce9f62f88a07db6fa427fae61f26c", "text": "This paper introduced a detailed ElGamal digital signature scheme and analyzed the existing problems of the ElGamal digital signature scheme. The scheme was then improved according to these problems, and an implicit ElGamal-type digital signature scheme with the function of message recovery was proposed. To address the fact that message recovery is not supported by the ElGamal signature scheme, this article presented a method to recover the message, which gives the ElGamal signature scheme the function of message recovery. On this basis, since most attacks on the ElGamal signature scheme make use of part of the signature, part of the signature message was hidden and the resulting implicit-type signature scheme was refined, forming a new implicit signature scheme with the function of message recovery. The security of the refined scheme was analyzed, and the results indicated that the new scheme is better than the old one.", "title": "" }, { "docid": "54ea2e0435e1a6a3554d420dab3b2f54", "text": "A lack of information security awareness within some parts of society as well as some organisations continues to exist today. Whilst we have emerged from the threats of the late 1990s of viruses such as Code Red and Melissa, through to the phishing emails of the mid 2000s and the financial damage that some, such as the Nigerian scam, caused, we continue to react poorly to new threats such as demanding money via SMS with a promise of death to those who won't pay. So is this lack of awareness translating into problems within the workforce? There is often a lack of knowledge as to what is an appropriate level of awareness for information security controls across an organisation. This paper presents the development of a theoretical framework and model that combines aspects of information security best practice standards as presented in ISO/IEC 27002 with theories of Situation Awareness. The resultant model is an information security awareness capability model (ISACM). 
A preliminary survey is being used to develop the Awareness Importance element of the model and will leverage the opinions of information security professionals. A subsequent survey is also being developed to measure the Awareness Capability element of the model. This will present scenarios that test Level 1 situation awareness (perception), Level 2 situation awareness (comprehension) and finally Level 3 situation awareness (projection). Is it time for awareness of information security to now hit the mainstream of society, governments and organisations?", "title": "" }, { "docid": "e2c31d73534beb59d96a316bb109c0c5", "text": "This paper presents the development of a wireless low power reconfigurable self-calibrated multi-sensing platform for gas sensing applications. The proposed electronic nose (EN) system monitors gas temperatures, concentrations, and mixtures wirelessly using the radio-frequency identification (RFID) technology. The EN takes the form of a set of gas and temperature sensors and multiple pattern recognition algorithms implemented on the Zynq system on chip (SoC) platform. The gas and temperature sensors are integrated on a semi-passive RFID tag to reduce the consumed power. Various gas sensors are tested, including an in-house fabricated 4 × 4 SnO2-based sensor and seven commercial Figaro sensors. The data is transmitted to the Zynq based processing unit using a RFID reader, where it is processed using multiple pattern recognition algorithms for dimensionality reduction and classification. Multiple algorithms are explored for optimum performance, including principal component analysis (PCA) and linear discriminant analysis (LDA) for dimensionality reduction while decision tree (DT) and k-nearest neighbors (KNN) are assessed for classification purpose. Different gases are targeted at diverse concentration, including carbon monoxide (CO), ethanol (C2H6O), carbon dioxide (CO2), propane (C3H8), ammonia (NH3), and hydrogen (H2). An accuracy of 100% is achieved in many cases with an overall accuracy above 90% in most scenarios. Finally, the hardware/software heterogeneous solution to implementation PCA, LDA, DT, and KNN on the Zynq SoC shows promising results in terms of resources usage, power consumption, and processing time.", "title": "" }, { "docid": "bd0b867687d41c4e9b730e199b1dde7b", "text": "Intracellular electromanipulation (ICEM), the manipulation of substructures of biological cells by means of externally applied electric fields requires electrical pulses of nanosecond to tens of nanosecond duration and amplitudes of tens of kilovolt. The load in these bioelectric experiments, a cuvette containing biological cells, immersed in a solution of high conductivity, is generally on the order of 10 Ω. Strip line pulsers satisfy the condition of low impedance. A Blumlein strip line pulser with a pressurized spark gap as switch has been designed. The electrical characteristics of a particular pulse generator, which produces an 8 ns pulse with voltages between 2 kV and 30 kV across a 10 Ω load are described.", "title": "" } ]
scidocsrr
c61213e12cf8551319df6b7e761fd4c3
Multi-agent based intrusion prevention and mitigation architecture for software defined networks
[ { "docid": "7b730ec53bcc62f49899a5f7a2bc590d", "text": "It is difficult to build a real network to test novel experiments. OpenFlow makes it easier for researchers to run their own experiments by providing a virtual slice and configuration on real networks. Multiple users can share the same network by assigning a different slice for each one. Users are given the responsibility to maintain and use their own slice by writing rules in a FlowTable. Misconfiguration problems can arise when a user writes conflicting rules for single FlowTable or even within a path of multiple OpenFlow switches that need multiple FlowTables to be maintained at the same time.\n In this work, we describe a tool, FlowChecker, to identify any intra-switch misconfiguration within a single FlowTable. We also describe the inter-switch or inter-federated inconsistencies in a path of OpenFlow switches across the same or different OpenFlow infrastructures. FlowChecker encodes FlowTables configuration using Binary Decision Diagrams and then uses the model checker technique to model the inter-connected network of OpenFlow switches.", "title": "" } ]
[ { "docid": "7c4822a90e594a27ddb9d6dd3e6aeb38", "text": "It is shown that if there are P noncoincident input patterns to learn and a two-layered feedforward neural network having P-1 sigmoidal hidden neuron and one dummy hidden neuron is used for the learning, then any suboptimal equilibrium point of the corresponding error surface is unstable in the sense of Lyapunov. This result leads to a sufficient local minima free condition for the backpropagation learning.", "title": "" }, { "docid": "88048217d8d052dbe1d2b74145be76b5", "text": "Human learners, including infants, are highly sensitive to structure in their environment. Statistical learning refers to the process of extracting this structure. A major question in language acquisition in the past few decades has been the extent to which infants use statistical learning mechanisms to acquire their native language. There have been many demonstrations showing infants' ability to extract structures in linguistic input, such as the transitional probability between adjacent elements. This paper reviews current research on how statistical learning contributes to language acquisition. Current research is extending the initial findings of infants' sensitivity to basic statistical information in many different directions, including investigating how infants represent regularities, learn about different levels of language, and integrate information across situations. These current directions emphasize studying statistical language learning in context: within language, within the infant learner, and within the environment as a whole. WIREs Cogn Sci 2010 1 906-914 This article is categorized under: Linguistics > Language Acquisition Psychology > Language.", "title": "" }, { "docid": "2e3f8e33947f8bdba390a8818330f8e4", "text": "We introduce a new technique for designing miniaturized-element frequency selective surfaces (MEFSSs) with narrowband, bandpass responses of order N ≥ 2. The proposed structure is composed of two-dimensional periodic arrays of subwavelength inductive wire grids separated by dielectric substrates. A simple equivalent circuit model, composed of transmission line resonators coupled together with shunt inductors, is presented for this structure. Using this equivalent circuit model, an analytical synthesis procedure is developed that can be used to synthesize the MEFSS from its desired system-level performance indicators such as the center frequency of operation and bandwidth. Using this synthesis procedure, a prototype of the proposed MEFSS with a second-order bandpass response, center frequency of 21 GHz, and fractional bandwidth of 5% is designed, fabricated, and experimentally characterized. The measurement results confirm the theoretical predictions and the design procedure of the structure and demonstrate that the proposed MEFSS has a stable frequency response with respect to the angle of incidence of the EM wave in the ±40° range for both TE and TM polarizations of incidence.", "title": "" }, { "docid": "b7189c1b1dc625fb60a526d81c0d0a89", "text": "This paper presents a development of an anthropomorphic robot hand, `KITECH Hand' that has 4 full-actuated fingers. Most robot hands have small size simultaneously many joints as compared with robot manipulators. Components of actuator, gear, and sensors used for building robots are not small and are expensive, and those make it difficult to build a small sized robot hand. 
Differently from conventional development of robot hands, KITECH hand adopts a RC servo module that is cheap, easily obtainable, and easy to handle. The RC servo module that have been already used for several small sized humanoid can be new solution of building small sized robot hand with many joints. The feasibility of KITECH hand in object manipulation is shown through various experimental results. It is verified that the modified RC servo module is one of effective solutions in the development of a robot hand.", "title": "" }, { "docid": "3576264345c3e02b36ac1b52c2ad48d3", "text": "The block matching 3D (BM3D) is an efficient image model, which has found few applications other than its niche area of denoising. We will develop a magnetic resonance imaging (MRI) reconstruction algorithm, which uses decoupled iterations alternating over a denoising step realized by the BM3D algorithm and a reconstruction step through an optimization formulation. The decoupling of the two steps allows the adoption of a strategy with a varying regularization parameter, which contributes to the reconstruction performance. This new iterative algorithm efficiently harnesses the power of the nonlocal, image-dependent BM3D model. The MRI reconstruction performance of the proposed algorithm is superior to state-of-the-art algorithms from the literature. A convergence analysis of the algorithm is also presented.", "title": "" }, { "docid": "519172fb24e370a24da92711d827bf77", "text": "We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory. To account for the fact that typically there are multiple correct SQL queries with the same or very similar semantics, we draw inspiration from syntactic parsing techniques and propose to train our sequence-to-action models with non-deterministic oracles. We evaluate our models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the test set, a 2.1% absolute improvement over the models trained with traditional static oracles assuming a single correct target SQL query. When further combined with the executionguided decoding strategy, our model sets a new state-of-the-art performance at an execution accuracy of 87.1%.", "title": "" }, { "docid": "c5f05fd620e734506874c8ec9e839535", "text": "Superficial vein thrombosis is a rare pathology that was first described by Mordor, although his description of phlebitis was observed exclusively at the thoracic wall. In 1955, Braun-Falco described penile thrombosis and later superficial penile vein thrombosis was first reported by Helm and Hodge. Mondor's disease of the penis is a rare entity with a reported incidence of 1.39%. It is described most of the time as a self-limited disease however it causes great morbidity to the patient who suffers from it. The pathogenesis of Mondor's disease is unknown. Its diagnosis is based on clinical signs such as a cordlike induration on the dorsal face of the penis, and imaging studies, doppler ultrasound is the instrument of choice. Treatment is primarily symptomatic but some cases may require surgical management however an accurate diagnostic resolves almost every case. We will describe the symptoms, diagnosis, and treatment of superficial thrombophlebitis of the dorsal vein of the penis.", "title": "" }, { "docid": "22ad829acba8d8a0909f2b8e31c1f0c3", "text": "Covariance matrices capture correlations that are invaluable in modeling real-life datasets. 
Using all d elements of the covariance (in d dimensions) is costly and could result in over-fitting; and the simple diagonal approximation can be over-restrictive. In this work, we present a new model, the Low-Rank Gaussian Mixture Model (LRGMM), for modeling data which can be extended to identifying partitions or overlapping clusters. The curse of dimensionality that arises in calculating the covariance matrices of the GMM is countered by using low-rank perturbed diagonal matrices. The efficiency is comparable to the diagonal approximation, yet one can capture correlations among the dimensions. Our experiments reveal the LRGMM to be an efficient and highly applicable tool for working with large high-dimensional datasets.", "title": "" }, { "docid": "8746c488535baf8d715232811ca4c8ed", "text": "To optimize polysaccharide extraction from Spirulina sp., the effect of solid-to-liquid ratio, extraction temperature and time were investigated using Box-Behnken experimental design and response surface methodology. The results showed that extraction temperature and solid-to-liquid ratio had a significant impact on the yield of polysaccharides. A polysaccharides yield of around 8.3% dry weight was obtained under the following optimized conditions: solid-to-liquid ratio of 1:45, temperature of 90°C, and time of 120 min. The polysaccharide extracts contained rhamnose, which accounted for 53% of the total sugars, with a phenolic content of 45 mg GAE/g sample.", "title": "" }, { "docid": "368a3dd36283257c5573a7e1ab94e930", "text": "This paper develops the multidimensional binary search tree (or k-d tree, where k is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The k-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an n record file are: insertion, O(log n); deletion of the root, O(n^((k-1)/k)); deletion of a random node, O(log n); and optimization (guarantees logarithmic performance of searches), O(n log n). Search algorithms are given for partial match queries with t keys specified [proven maximum running time of O(n^((k-t)/k))] and for nearest neighbor queries [empirically observed average running time of O(log n).] These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that k-d trees could be quite useful in many applications, and examples of potential uses are given.", "title": "" }, { "docid": "0f50b3dd947b9a04d121079e0fa8f10e", "text": "Twitter has undoubtedly caught the attention of both the general public, and academia as a microblogging service worthy of study and attention. 
Twitter has several features that sets it apart from other social media/networking sites, including its 140 character limit on each user's message (tweet), and the unique combination of avenues via which information is shared: directed social network of friends and followers, where messages posted by a user is broadcast to all its followers, and the public timeline, which provides real time access to posts or tweets on specific topics for everyone. While the character limit plays a role in shaping the type of messages that are posted and shared, the dual mode of sharing information (public vs posts to one's followers) provides multiple pathways in which a posting can propagate through the user landscape via forwarding or \"Retweets\", leading us to ask the following questions: How does a message resonate and spread widely among the users on Twitter, and are the resulting cascade dynamics different due to the unique features of Twitter? What role does content of a message play in its popularity? Realizing that tweet content would play a major role in the information propagation dynamics (as borne out by the empirical results reported in this paper), we focused on patterns of information propagation on Twitter by observing the sharing and reposting of messages around a specific topic, i.e. the Iranian election.\n We know that during the 2009 post-election protests in Iran, Twitter and its large community of users played an important role in disseminating news, images, and videos worldwide and in documenting the events. We collected tweets of more than 20 million publicly accessible users on Twitter and analyzed over three million tweets related to the Iranian election posted by around 500K users during June and July of 2009. Our results provide several key insights into the dynamics of information propagation that are special to Twitter. For example, the tweet cascade size distribution is a power-law with exponent of -2.51 and more than 99% of the cascades have depth less than 3. The exponent is different from what one expects from a branching process (usually used to model information cascades) and so is the shallow depth, implying that the dynamics underlying the cascades are potentially different on Twitter. Similarly, we are able to show that while Twitter's Friends-Followers network structure plays an important role in information propagation through retweets (re-posting of another user's message), the search bar and trending topics on Twitter's front page offer other significant avenues for the spread of information outside the explicit Friends-Followers network. We found that at most 63.7% of all retweets in this case were reposts of someone the user was following directly. We also found that at least 7% of retweets are from the public posts, and potentially more than 30% of retweets are from the public timeline. In the end, we examined the context and content of the kinds of information that gained the attention of users and spread widely on Twitter. Our data indicates that the retweet probabilities are highly content dependent.", "title": "" }, { "docid": "705694c36d36ca6950740d754160f4bd", "text": "There is a growing concern that excessive and uncontrolled use of Facebook not only interferes with performance at school or work but also poses threats to physical and psychological well-being. The present research investigated how two individual difference variables--social anxiety and need for social assurance--affect problematic use of Facebook. 
Drawing on the basic premises of the social skill model of problematic Internet use, we hypothesized that social anxiety and need for social assurance would be positively correlated with problematic use of Facebook. Furthermore, it was predicted that need for social assurance would moderate the relationship between social anxiety and problematic use. A cross-sectional online survey was conducted with a college student sample in the United States (N=243) to test the proposed hypotheses. Results showed that both social anxiety and need for social assurance had a significant positive association with problematic use of Facebook. More importantly, the data demonstrated that need for social assurance served as a significant moderator of the relationship between social anxiety and problematic Facebook use. The positive association between social anxiety and problematic Facebook use was significant only for Facebook users with medium to high levels of need for social assurance but not for those with a low level of need for social assurance. Theoretical and practical implications of these findings were discussed.", "title": "" }, { "docid": "9837e331cf1c2a5bb0cee92e4ae44ca5", "text": "Isocitrate dehydrogenase 2 (IDH2) is located in the mitochondrial matrix. IDH2 acts in the forward Krebs cycle as an NADP(+)-consuming enzyme, providing NADPH for maintenance of the reduced glutathione and peroxiredoxin systems and for self-maintenance by reactivation of cystine-inactivated IDH2 by glutaredoxin 2. In highly respiring cells, the resulting NAD(+) accumulation then induces sirtuin-3-mediated activating IDH2 deacetylation, thus increasing its protective function. Reductive carboxylation of 2-oxoglutarate by IDH2 (in the reverse Krebs cycle direction), which consumes NADPH, may follow glutaminolysis of glutamine to 2-oxoglutarate in cancer cells. When the reverse aconitase reaction and citrate efflux are added, this overall \"anoxic\" glutaminolysis mode may help highly malignant tumors survive aglycemia during hypoxia. Intermittent glycolysis would hypothetically be required to provide ATP. When oxidative phosphorylation is dormant, this mode causes substantial oxidative stress. Arg172 mutants of human IDH2-frequently found with similar mutants of cytosolic IDH1 in grade 2 and 3 gliomas, secondary glioblastomas, and acute myeloid leukemia-catalyze reductive carboxylation of 2-oxoglutarate and reduction to D-2-hydroxyglutarate, which strengthens the neoplastic phenotype by competitive inhibition of histone demethylation and 5-methylcytosine hydroxylation, leading to genome-wide histone and DNA methylation alternations. D-2-hydroxyglutarate also interferes with proline hydroxylation and thus may stabilize hypoxia-induced factor α.", "title": "" }, { "docid": "1878c50133ec1a66dc1ff740d3948894", "text": "This work presents a script-based development environment aimed at allowing users to easily design and create mechanical bodies for folded plastic robots. The origami-inspired fabrication process is inexpensive and widely accessible, and the tools developed in this work allow for open source design sharing and modular reuse. Designs are generated by recursively combining mechanical components - from primitive building blocks, through mechanisms and assemblies, to full robots - in a flexible yet well-defined manner. This process was used to design robotic elements of increasing complexity up to a multi-degree-of-freedom compliant manipulator arm, demonstrating the power of this system. 
The developed system is extensible, opening avenues for further research ultimately leading to the development of a complete robot compiler.", "title": "" }, { "docid": "c47b59ea14b86fa18e69074129af72ec", "text": "Multiple networks naturally appear in numerous high-impact applications. Network alignment (i.e., finding the node correspondence across different networks) is often the very first step for many data mining tasks. Most, if not all, of the existing alignment methods are solely based on the topology of the underlying networks. Nonetheless, many real networks often have rich attribute information on nodes and/or edges. In this paper, we propose a family of algorithms FINAL to align attributed networks. The key idea is to leverage the node/edge attribute information to guide (topology-based) alignment process. We formulate this problem from an optimization perspective based on the alignment consistency principle, and develop effective and scalable algorithms to solve it. Our experiments on real networks show that (1) by leveraging the attribute information, our algorithms can significantly improve the alignment accuracy (i.e., up to a 30% improvement over the existing methods); (2) compared with the exact solution, our proposed fast alignment algorithm leads to a more than 10 times speed-up, while preserving a 95% accuracy; and (3) our on-query alignment method scales linearly, with an around 90% ranking accuracy compared with our exact full alignment method and a near real-time response time.", "title": "" }, { "docid": "ddd4ccf3d68d12036ebb9e5b89cb49b8", "text": "This paper presents a modified FastSLAM approach for the specific application of radar sensors using the Doppler information to increase the localization and map accuracy. The developed approach is based on the FastSLAM 2.0 algorithm. It is shown how the FastSLAM 2.0 approach can be significantly improved by taking the Doppler information into account. Therefore, the modelled, so-called expected Doppler, and the measured Doppler are compared for every detection. Both, simulations and experiments on real world data show the increase in accuracy of the modified FastSLAM approach by incorporating the Doppler measurements of automotive radar sensors. The proposed algorithm is compared to the state-of-the-art FastSLAM 2.0 algorithm and the vehicle odometry, whereas profiles of an Automotive Dynamic Motion Analyzer serve as the reference.", "title": "" }, { "docid": "8eafcf061e2b9cda4cd02de9bf9a31d1", "text": "Building upon recent Deep Neural Network architectures, current approaches lying in the intersection of Computer Vision and Natural Language Processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of news articles in which the textual content often expresses connotative and ambiguous relations that are only suggested but not directly inferred from images. We introduce an adaptive CNN architecture that shares most of the structure for multiple tasks including source detection, article illustration and geolocation of articles. 
Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset with approximately 100K news articles including images, text and captions, and enriched with heterogeneous meta-data (such as GPS coordinates and user comments). We show this dataset to be appropriate to explore all aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures, and different representations of the textual and visual features. We report very promising results and bring to light several limitations of current state-of-the-art in this kind of domain, which we hope will help spur progress in the field.", "title": "" }, { "docid": "5fca35f8c075e799ccbb445cb62e0bdf", "text": "While constantly rising, the prevalence of allergies is globally one of the highest among chronic diseases. Current treatments of allergic diseases include the application of anti-histamines, immunotherapy, steroids, and anti-immunoglobulin E (IgE) antibodies. Here we report mammalian cells engineered with a synthetic signaling cascade able to monitor extracellular pathophysiological levels of interleukin 4 and interleukin 13, two main cytokines orchestrating allergic inflammation. Upon activation of transgenic cells by these cytokines, designed ankyrin repeat protein (DARPin) E2_79, a non-immunogenic protein binding human IgE, is secreted in a precisely controlled and reversible manner. Using human whole blood cell culturing, we demonstrate that the mammalian dual T helper 2 cytokine sensor produces sufficient levels of DARPin E2_79 to dampen histamine release in allergic subjects exposed to allergens. Hence, therapeutic gene networks monitoring disease-associated cytokines coupled with in situ production, secretion and systemic delivery of immunomodulatory biologics may foster advances in the treatment of allergies. The standard treatment for an allergic response is anti-histamines, steroids and anti-IgE antibodies. Here the authors present a genetic circuit that senses IL-4 and IL-13 and responses with DARPin production to bind IgE.", "title": "" }, { "docid": "e6e74971af2576ff119d277927727659", "text": "In Germany there is limited information available about the distribution of the tropical rat mite (Ornithonyssus bacoti) in rodents. A few case reports show that this hematophagous mite species may also cause dermatitis in man. Having close body contact to small rodents is an important question for patients with pruritic dermatoses. The definitive diagnosis of this ectoparasitosis requires the detection of the parasite, which is more likely to be found in the environment of its host (in the cages, in the litter or in corners or cracks of the living area) than on the hosts' skin itself. A case of infestation with tropical rat mites in a family is reported here. Three mice that had been removed from the home two months before were the reservoir. The mites were detected in a room where the cage with the mice had been placed months ago. 
Treatment requires the eradication of the parasites on its hosts (by a veterinarian) and in the environment (by an exterminator) with adequate acaricides such as permethrin.", "title": "" }, { "docid": "e489bf53271cb75de82cdb5aec5196e6", "text": "This paper presents the sensitivity optimization of a microwave biosensor dedicated to the analysis of a single living biological cell from 40 MHz to 40 GHz, directly in its culture medium. To enhance the sensor sensitivity, different capacitive gap located in the center of the biosensor, below the cell position, have been evaluated with different beads sizes. The best capacitive and conductive contrasts have been reached for a gap width of 5 μm with beads exhibiting diameters of 10 and 20 μm, due to electromagnetic field penetration in the beads. Contrasts improvement of 40 and 60 % have been achieved with standard deviations in the order of only 4% and 6% for the capacitive and conductive contrasts respectively. This sensor therefore permits to measure single living biological cells directly in their culture medium with capacitive and conductive contrasts of 0.4 fF at 5 GHz and 85 μS at 40 GHz, and associated standard deviations estimated at 7% and 14% respectively.", "title": "" } ]
scidocsrr
373c4083b5a6245461887675624550a8
Generating Holistic 3D Scene Abstractions for Text-Based Image Retrieval
[ { "docid": "46674077de97f82bc543f4e8c0a8243a", "text": "Recently, multiple formulations of vision problems as probabilistic inversions of generative models based on computer graphics have been proposed. However, applications to 3D perception from natural images have focused on low-dimensional latent scenes, due to challenges in both modeling and inference. Accounting for the enormous variability in 3D object shape and 2D appearance via realistic generative models seems intractable, as does inverting even simple versions of the many-tomany computations that link 3D scenes to 2D images. This paper proposes and evaluates an approach that addresses key aspects of both these challenges. We show that it is possible to solve challenging, real-world 3D vision problems by approximate inference in generative models for images based on rendering the outputs of probabilistic CAD (PCAD) programs. Our PCAD object geometry priors generate deformable 3D meshes corresponding to plausible objects and apply affine transformations to place them in a scene. Image likelihoods are based on similarity in a feature space based on standard mid-level image representations from the vision literature. Our inference algorithm integrates single-site and locally blocked Metropolis-Hastings proposals, Hamiltonian Monte Carlo and discriminative datadriven proposals learned from training data generated from our models. We apply this approach to 3D human pose estimation and object shape reconstruction from single images, achieving quantitative and qualitative performance improvements over state-of-the-art baselines.", "title": "" }, { "docid": "f2661b3833c6bcf95c877b56cc9d3c68", "text": "This paper develops a novel framework for semantic image retrieval based on the notion of a scene graph. Our scene graphs represent objects (“man”, “boat”), attributes of objects (“boat is white”) and relationships between objects (“man standing on boat”). We use these scene graphs as queries to retrieve semantically related images. To this end, we design a conditional random field model that reasons about possible groundings of scene graphs to test images. The likelihoods of these groundings are used as ranking scores for retrieval. We introduce a novel dataset of 5,000 human-generated scene graphs grounded to images and use this dataset to evaluate our method for image retrieval. In particular, we evaluate retrieval using full scene graphs and small scene subgraphs, and show that our method outperforms retrieval methods that use only objects or low-level image features. In addition, we show that our full model can be used to improve object localization compared to baseline methods.", "title": "" } ]
[ { "docid": "81d11f44d55e57d95a04f9a1ea35223c", "text": "In many research fields such as Psychology, Linguistics, Cognitive Science and Artificial Intelligence, computing semantic similarity between words is an important issue. In this paper a new semantic similarity metric, that exploits some notions of the feature based theory of similarity and translates it into the information theoretic domain, which leverages the notion of Information Content (IC), is presented. In particular, the proposed metric exploits the notion of intrinsic IC which quantifies IC values by scrutinizing how concepts are arranged in an ontological structure. In order to evaluate this metric, an on line experiment asking the community of researchers to rank a list of 65 word pairs has been conducted. The experiment’s web setup allowed to collect 101 similarity ratings and to differentiate native and non-native English speakers. Such a large and diverse dataset enables to confidently evaluate similarity metrics by correlating them with human assessments. Experimental evaluations using WordNet indicate that the proposed metric, coupled with the notion of intrinsic IC, yields results above the state of the art. Moreover, the intrinsic IC formulation also improves the accuracy of other IC-based metrics. In order to investigate the generality of both the intrinsic IC formulation and proposed similarity metric a further evaluation using the MeSH biomedical ontology has been performed. Even in this case significant results were obtained. The proposed metric and several others have been implemented in the Java WordNet Similarity Library.", "title": "" }, { "docid": "68118c94d8e00031a7c9996ab282881f", "text": "A cascadable power-on-reset (POR) delay element consuming nanowatt of peak power was developed to be used in very compact power-on-reset pulse generator (POR-PG) circuits. Operation principles and features of the POR delay element were presented in this paper. The delay element was designed, and fabricated in a 0.5µm 2P3M CMOS process. It was determined from simulation as well as measurement results that the delay element works wide supply voltage ranges between 1.8 volt and 5 volt and supply voltage rise times between 100nsec and 1msec allowing wide dynamic range POR-PG circuits. It also has very small silicon footprint. Layout size of a single POR delay element was 35µm x 55µm in 0.5µm CMOS process.", "title": "" }, { "docid": "4cb49a91b5a30909c99138a8e36badcd", "text": "The main goal of Business Process Management (BPM) is conceptualising, operationalizing and controlling workflows in organisations based on process models. In this paper we discuss several limitations of the workflow paradigm and suggest that process models can also play an important role in analysing how organisations think about themselves through storytelling. We contrast the workflow paradigm with storytelling through a comparative analysis. We also report a case study where storytelling has been used to elicit and document the practices of an IT maintenance team. This research contributes towards the development of better process modelling languages and tools.", "title": "" }, { "docid": "0e1cc3ddf39c9fff13894cf1d924c8cc", "text": "This paper introduces NSGA-Net, an evolutionary approach for neural architecture search (NAS). 
NSGA-Net is designed with three goals in mind: (1) a NAS procedure for multiple, possibly conflicting, objectives, (2) efficient exploration and exploitation of the space of potential neural network architectures, and (3) output of a diverse set of network architectures spanning a trade-off frontier of the objectives in a single run. NSGA-Net is a population-based search algorithm that explores a space of potential neural network architectures in three steps, namely, a population initialization step that is based on prior-knowledge from hand-crafted architectures, an exploration step comprising crossover and mutation of architectures and finally an exploitation step that applies the entire history of evaluated neural architectures in the form of a Bayesian Network prior. Experimental results suggest that combining the objectives of minimizing both an error metric and computational complexity, as measured by FLOPS, allows NSGA-Net to find competitive neural architectures near the Pareto front of both objectives on two different tasks, object classification and object alignment. NSGA-Net obtains networks that achieve 3.72% (at 4.5 million FLOP) error on CIFAR-10 classification and 8.64% (at 26.6 million FLOP) error on the CMU-Car alignment task. Code available at: https://github.com/ianwhale/nsga-net.", "title": "" }, { "docid": "10c5e27114860c802e25f7f69538093f", "text": "Thoracoscopic repair of esophageal atresia is considered to be one of the more advanced pediatric surgical procedures, and it undoubtedly has a learning curve. This is a single-center study that was designed to determine the learning curve of thoracoscopic repair of esophageal atresia. The study involved comparison of the first and second five-year outcomes of thoracoscopic esophageal atresia repair. The demographics of the two groups were comparable. There was a remarkable reduction of postoperative leakage or stenosis, and recurrence of fistulae, in spite of the fact that nowadays the procedure is mainly performed by young staff members and fellows. There is a considerable learning curve for thoracoscopic repair of esophageal atresia. Centers with the ambition to start up a program for thoracoscopic repair of esophageal atresia should do so with the guidance of experienced centers.", "title": "" }, { "docid": "b0892ff39abac8a35c88a3b6aa6a9045", "text": "Video-based fire detection is currently a fairly common application with the growth in the number of installed surveillance video systems. Moreover, the related processing units are becoming more powerful. Smoke is an early sign of most fires; therefore, selecting an appropriate smoke-detection method is essential. However, detecting smoke without creating a false alarm remains a challenging problem for open or large spaces with the disturbances of common moving objects, such as pedestrians and vehicles. This study proposes a novel video-based smoke-detection method that can be incorporated into a surveillance system to provide early alerts. In this study, the process of extracting smoke features from candidate regions was accomplished by analyzing the spatial and temporal characteristics of video sequences for three important features: edge blurring, gradual energy changes, and gradual chromatic configuration changes. The proposed spatialtemporal analysis technique improves the feature extraction of gradual energy changes. 
In order to make the video smoke-detection results more reliable, these three features were combined using a support vector machine (SVM) technique and a temporal-based alarm decision unit (ADU) was also introduced. The effectiveness of the proposed algorithm was evaluated on a PC with an Intel® Core2 Duo CPU (2.2 GHz) and 2 GB RAM. The average processing time was 32.27 ms per frame; i.e., the proposed algorithm can process 30.98 frames per second. Experimental results showed that the proposed system can detect smoke effectively with a low false-alarm rate and a short reaction time in many real-world scenarios.", "title": "" }, { "docid": "3eccedb5a9afc0f7bc8b64c3b5ff5434", "text": "The design of a high impedance, high Q tunable load is presented with operating frequency between 400MHz and close to 6GHz. The bandwidth is made independently tunable of the carrier frequency by using an active inductor resonator with multiple tunable capacitances. The Q factor can be tuned from a value 40 up to 300. The circuit is targeted at 5G wideband applications requiring narrow band filtering where both centre frequency and bandwidth needs to be tunable. The circuit impedance is applied to the output stage of a standard CMOS cascode and results show that high Q factors can be achieved close to 6GHz with 11dB rejection at 20MHz offset from the centre frequency. The circuit architecture takes advantage of currently available low cost, low area tunable capacitors based on micro-electromechanical systems (MEMS) and Barium Strontium Titanate (BST).", "title": "" }, { "docid": "d3b501c19b65d276ec6f349b35f4da1f", "text": "The design of a macroscope constructed with photography lenses is described and several applications are demonstrated. The macroscope incorporates epi-illumination, a 0.4 numerical aperture, and a 40 mm working distance for imaging wide fields in the range of 1.5-20 mm in diameter. At magnifications of 1X to 2.5X, fluorescence images acquired with the macroscope were 100-700 times brighter than those obtained with commercial microscope objectives at similar magnifications. In several biological applications, the improved light collection efficiency (20-fold, typical) not only minimized bleaching effects, but, in concert with improved illumination throughput (15-fold, typical), significantly enhanced object visibility as well. Reduced phototoxicity and increased signal-to-noise ratios were observed in the in vivo real-time optical imaging of cortical activity using voltage-sensitive dyes. Furthermore, the macroscope has a depth of field which is 5-10 times thinner than that of a conventional low-power microscope. This shallow depth of field has facilitated the imaging of cortical architecture based on activity-dependent intrinsic cortical signals in the living primate brain. In these reflection measurements large artifacts from the surface blood vessels, which were observed with conventional lenses, were eliminated with the macroscope.", "title": "" }, { "docid": "82917c4e6fb56587cc395078c14f3bb7", "text": "We can leverage data and complex systems science to better understand society and human nature on a population scale through language — utilizing tools that include sentiment analysis, machine learning, and data visualization. Data-driven science and the sociotechnical systems that we use every day are enabling a transformation from hypothesis-driven, reductionist methodology to complex systems sciences. 
Namely, the emergence and global adoption of social media has rendered possible the real-time estimation of population-scale sentiment, with profound implications for our understanding of human behavior. Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a “big data” lens. Given the growing assortment of sentiment measuring instruments, it is imperative to understand which aspects of sentiment dictionaries contribute to both their classification accuracy and their ability to provide richer understanding of texts. Here, we perform detailed, quantitative tests and qualitative assessments of 6 dictionary-based methods applied to 4 different corpora, and briefly examine a further 20 methods. We show that while inappropriate for sentences, dictionary-based methods are generally robust in their classification accuracy for longer texts. Most importantly they can aid understanding of texts with reliable and meaningful word shift graphs if (1) the dictionary covers a sufficiently large enough portion of a given text’s lexicon when weighted by word usage frequency; and (2) words are scored on a continuous scale. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful to us. By classifying the emotional arcs for a filtered subset of 4,803 stories from Project Gutenberg’s fiction collection, we find a set of six core trajectories which form the building blocks of complex narratives. We strengthen our findings by separately applying optimization, linear decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads. Within stories lie the core values of social behavior, rich with both strategies and proper protocol, which we can begin to study more broadly and systematically as a true reflection of culture. Of profound scientific interest will be the degree to which we can eventually understand the full landscape of human stories, and data driven approaches will play a crucial role. Finally, we utilize web-scale data from Twitter to study the limits of what social data can tell us about public health, mental illness, discourse around the protest movement of #BlackLivesMatter, discourse around climate change, and hidden networks. We conclude with a review of published works in complex systems that separately analyze charitable donations, the happiness of words in 10 languages, 100 years of daily temperature data across the United States, and Australian Rules Football games.", "title": "" }, { "docid": "8fbbeeae48118cfd2f77e6a7bb224c0c", "text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. American Educational Research Association is collaborating with JSTOR to digitize, preserve and extend access to Educational Researcher.", "title": "" }, { "docid": "e516371d568f0c080a8c515d4ad1512c", "text": "Many diseases have been described to be associated with inflammatory processes. 
The currently available anti-inflammatory drug therapy is often not successful or causes intolerable side effects. Thus, new anti-inflammatory substances are still urgently needed. Plants were the first source of remedies in the history of mankind. Since their chemical characterization in the 19th century, herbal bioactive compounds have fueled drug development. Also, nowadays, new plant-derived agents continuously enrich our drug arsenal (e.g., vincristine, galantamine, and artemisinin). The number of new, pharmacologically active herbal ingredients, in particular that of anti-inflammatory compounds, rises continuously. The major obstacle in this field is the translation of preclinical knowledge into evidence-based clinical progress. Human trials of good quality are often missing or, when available, are frequently not suitable to really prove a therapeutical value. This minireview will summarize the current situation of 6 very prominent plant-derived anti-inflammatory compounds: curcumin, colchicine, resveratrol, capsaicin, epigallocatechin-3-gallate (EGCG), and quercetin. We will highlight their clinical potential and/or pinpoint an overestimation. Moreover, we will sum up the planned trials in order to provide insights into the inflammatory disorders that are hypothesized to be beneficially influenced by the compound.", "title": "" }, { "docid": "b769f7b96b9613132790a73752c2a08f", "text": "ITIL is the most widely used IT framework in majority of organizations in the world now. However, implementing such best practice experiences in an organization comes with some implementation challenges such as staff resistance, task conflicts and ambiguous orders. It means that implementing such framework is not easy and it can be caused of the organization destruction. This paper tries to describe overall view of ITIL framework and address major reasons on the failure of this framework’s implementation in the organizations", "title": "" }, { "docid": "510a43227819728a77ff0c7fa06fa2d0", "text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. 
We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.", "title": "" }, { "docid": "0b777fa9b40050559826ec01285ea2ec", "text": "Honeyd (N. Provos, 2004) is a popular tool developed by Niels Provos that offers a simple way to emulate services offered by several machines on a single PC. It is a so called low interaction honeypot. Responses to incoming requests are generated thanks to ad hoc scripts that need to be written by hand. As a result, few scripts exist, especially for services handling proprietary protocols. In this paper, we propose a method to alleviate these problems by automatically generating new scripts. We explain the method and describe its limitations. We analyze the quality of the generated scripts thanks to two different methods. On the one hand, we have launched known attacks against a machine running our scripts; on the other hand, we have deployed that machine on the Internet, next to a high interaction honeypot during two months. For those attackers that have targeted both machines, we can verify if our scripts have, or not, been able to fool them. We also discuss the various tuning parameters of the algorithm that can be set to either increase the quality of the script or, at the contrary, to reduce its complexity", "title": "" }, { "docid": "c88d122bb8b91208301b2dcb32bac468", "text": "This paper presents a simulation of the speed control of a separately excited direct current motor (SEDM) using fuzzy logic control (FLC) in Matlab/Simulink environment. A fuzzy logic controller was designed to vary the motor’s speed by varying the armature voltage of the separately excited DC motor in the constant torque region (below the rated speed). The simulation results show that the armature voltage control method is better than field control method with regards to delay time and overshoot.", "title": "" }, { "docid": "9fa1b755805d889cff096acf2572f2e1", "text": "Watermarking embeds a secret message into a cover message. In media watermarking the secret is usually a copyright notice and the cover a digital image. Watermarking an object discourages intellectual property theft, or when such theft has occurred, allows us to prove ownership. The Software Watermarking problem can be described as follows. Embed a structure W into a program P such that: W can be reliably located and extracted from P even after P has been subjected to code transformations such as translation, optimization and obfuscation; W is stealthy; W has a high data rate; embedding W into P does not adversely affect the performance of P; and W has a mathematical property that allows us to argue that its presence in P is the result of deliberate actions. In this paper we describe a software watermarking technique in which a dynamic graph watermark is stored in the execution state of a program. Because of the hardness of pointer alias analysis such watermarks are difficult to attack automatically.", "title": "" }, { "docid": "01431d4ba95cebca0ca05f2920dd9171", "text": "There is an increasing interest in electric transportation. Most large manufacturers now produce hybrid versions of their popular models and in some countries electric cycles and scooter are now popular. Motor sport is often used to develop technology and in this paper designs for electric racing motorcycles are addressed. 
These are in-frame motors (rather than hub motors which can affect handling and are not as powerful). Typically 10 to 12 kW-hours of batteries can be carried on the cycle and the batteries are almost exhausted at the end of a race. Therefore very high efficiency over a range of operation is needed, but also the motors need to be compact and have high torque density. This paper examines the use of permanent magnet motors and possible designs.", "title": "" }, { "docid": "055cb9aca6b16308793944154dc7866a", "text": "Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion and in neural network literature, this is the least addressed component, yet it has a decisive influence in generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural network and other classifier topologies to approximate correct labels. The main motivation to utilize least squares in regression simply comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications – which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error combined with the maximum likelihood principle could be emphasized to promote this criterion. In nonparametric regression, least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear-second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This begs the question: what other alternative cost functions could be used to train adaptive systems and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?", "title": "" }, { "docid": "69eb0f2e3ab9cc4628024f8f5fb6d63f", "text": "As data continues to be produced in massive amounts, with increasing volume, velocity and variety, big data projects are growing in frequency and importance. However, the growth in the use of big data has outstripped the knowledge of how to support teams that need to do big data projects. 
In fact, while much has been written in terms of the use of algorithms that can help generate insightful analysis, much less has been written about methodologies, tools and frameworks that could enable teams to more effectively and efficiently \"do\" big data projects. Hence, this paper discusses the key research questions relating methodologies, tools and frameworks to improve big data team effectiveness as well as the potential goals for a big data process methodology. Finally, the paper also discusses related domains, such as software development, operations research and business intelligence, since these fields might provide insight into how to define a big data process methodology.", "title": "" }, { "docid": "4d77d0df6444b6dee8ce2a2c0b0aefc8", "text": "As businesses become more dependent on information technology for their operations, IS managers are under increasing pressure to deliver quality applications software on time and within budget. Thus, in addition to their technical skills, they must master the necessary management skills to lead and control software development projects. The purpose of this tutorial is to present the fundamental concepts of modern project management and show how these concepts can be applied to software development projects. The tutorial presents a broad overview of current software project management practices that evolved over the years from a variety of complex projects. The subject is presented from the manager's rather than from the developer's perspective. The focus is on large and complex projects because these projects are the most challenging and in need of an effective project management discipline.", "title": "" } ]
scidocsrr
21bf7f70f0d491ca82687a7af229cc28
Bitcoin: Bubble or Blockchain
[ { "docid": "d549d4e7c30e004556ac78bdc4119b92", "text": "Bitcoin is a peer-to-peer cryptographic currency system. Since its introduction in 2008, Bitcoin has gained noticeable popularity, mostly due to its following properties: (1) the transaction fees are very low, and (2) it is not controlled by any central authority, which in particular means that nobody can “print” the money to generate inflation. Moreover, the transaction syntax allows to create the so-called contracts, where a number of mutually-distrusting parties engage in a protocol to jointly perform some financial task, and the fairness of this process is guaranteed by the properties of Bitcoin. Although the Bitcoin contracts have several potential applications in the digital economy, so far they have not been widely used in real life. This is partly due to the fact that they are cumbersome to create and analyze, and hence risky to use. In this paper we propose to remedy this problem by using the methods originally developed for the computer-aided analysis for hardware and software systems, in particular those based on the timed automata. More concretely, we propose a framework for modeling the Bitcoin contracts using the timed automata in the UPPAAL model checker. Our method is general and can be used to model several contracts. As a proof-of-concept we use this framework to model some of the Bitcoin contracts from our recent previous work. We then automatically verify their security in UPPAAL, finding (and correcting) some subtle errors that were difficult to spot by the manual analysis. We hope that our work can draw the attention of the researchers working on formal modeling to the problem of the Bitcoin contract verification, and spark off more research on this topic.", "title": "" } ]
[ { "docid": "8e8c566d93f11bd96318978dd4b21ed1", "text": "Recently, neural-network based word embedding models have been shown to produce high-quality distributional representations capturing both semantic and syntactic information. In this paper, we propose a grouping-based context predictive model by considering the interactions of context words, which generalizes the widely used CBOW model and Skip-Gram model. In particular, the words within a context window are split into several groups with a grouping function, where words in the same group are combined while different groups are treated as independent. To determine the grouping function, we propose a relatedness hypothesis stating the relationship among context words and propose several context grouping methods. Experimental results demonstrate better representations can be learned with suitable context groups.", "title": "" }, { "docid": "31efc351ebeaf1316c0c99fc2d3f3985", "text": "One of the roles of accounting is to provide information on business performance, either through financial accounting indicators or otherwise. Theoretical-empirical studies on the relationship between Corporate Financial Performance (CFP) and Corporate Social Performance (CSP) have increased in recent years, indicating the development of this research field. However, the contribution to the theory by empirical studies is made in an incremental manner, given that each study normally focuses on a particular aspect of the theory. Therefore, it is periodically necessary to conduct an analysis to evaluate how the aggregation of empirical studies has contributed to the evolution of the theory. Designing such an analysis was the objective of the present study. The theoretical framework covered the following: stakeholder theory, the relationship between CSP and CFP, good management theory, and slack resource theory. This research covered a 15-year period (1996 to 2010), and the data collection employed a search tool for the following databases: Ebsco, Proquest, and ISI. The sampling process obtained a set of 58 exclusively theoretical-empirical and quantitative articles that test the CSP-CFP relationship. The main results in the theoretical field reinforce the proposed positive relationship between CSP and CFP and good management theory and demonstrate a deficiency in the explanation of the temporal lag in the causal relationship between CSP and CFP as well as deficiencies in the description of the CSP construct. These results suggest future studies to research the temporal lag in the causal relationship between CSP and CFP and the possible reasons that the positive association between CSP and CFP has not been assumed in some empirical studies.", "title": "" }, { "docid": "310f5f415449929bfbd019aba0b020bd", "text": "In the context of deep learning, this article presents an original deep network, namely CentralNet, for the fusion of information coming from different sensors. This approach is designed to efficiently and automatically balance the tradeoff between early and late fusion (i.e., between the fusion of low-level versus high-level information). More specifically, at each level of abstraction—the different levels of deep networks—unimodal representations of the data are fed to a central neural network which combines them into a common embedding. In addition, a multiobjective regularization is also introduced, helping to both optimize the central network and the unimodal networks. 
Experiments on four multimodal datasets not only show the state-of-the-art performance but also demonstrate that CentralNet can actually choose the best possible fusion strategy for a given problem.", "title": "" }, { "docid": "de67aeb2530695bcc6453791a5fa8c77", "text": "Sebaceous carcinoma is a rare adenocarcinoma with variable degrees of sebaceous differentiation, most commonly found on periocular skin, but also occasionally occur extraocular. It can occur in isolation or as part of the MuirTorre syndrome. Sebaceous carcinomas are yellow or red nodules or plaques often with a friable surface, ulceration, or crusting. On histological examination, sebaceous carcinomas are typically poorly circumscribed, asymmetric, and infiltrative. Individual cells are pleomorphic with atypical nuclei, mitoses, and a coarsely vacuolated cytoplasm.", "title": "" }, { "docid": "4b432e49485b57ddb1921478f2917d4b", "text": "Dynamic perturbations of reaching movements are an important technique for studying motor learning and adaptation. Adaptation to non-contacting, velocity-dependent inertial Coriolis forces generated by arm movements during passive body rotation is very rapid, and when complete the Coriolis forces are no longer sensed. Adaptation to velocity-dependent forces delivered by a robotic manipulandum takes longer and the perturbations continue to be perceived even when adaptation is complete. These differences reflect adaptive self-calibration of motor control versus learning the behavior of an external object or 'tool'. Velocity-dependent inertial Coriolis forces also arise in everyday behavior during voluntary turn and reach movements but because of anticipatory feedforward motor compensations do not affect movement accuracy despite being larger than the velocity-dependent forces typically used in experimental studies. Progress has been made in understanding: the common features that determine adaptive responses to velocity-dependent perturbations of jaw and limb movements; the transfer of adaptation to mechanical perturbations across different contact sites on a limb; and the parcellation and separate representation of the static and dynamic components of multiforce perturbations.", "title": "" }, { "docid": "8ff4c6a5208b22a47eb5006c329817dc", "text": "Goal: To evaluate a novel kind of textile electrodes based on woven fabrics treated with PEDOT:PSS, through an easy fabrication process, testing these electrodes for biopotential recordings. Methods: Fabrication is based on raw fabric soaking in PEDOT:PSS using a second dopant, squeezing and annealing. The electrodes have been tested on human volunteers, in terms of both skin contact impedance and quality of the ECG signals recorded at rest and during physical activity (power spectral density, baseline wandering, QRS detectability, and broadband noise). Results: The electrodes are able to operate in both wet and dry conditions. Dry electrodes are more prone to noise artifacts, especially during physical exercise and mainly due to the unstable contact between the electrode and the skin. Wet (saline) electrodes present a stable and reproducible behavior, which is comparable or better than that of traditional disposable gelled Ag/AgCl electrodes. Conclusion: The achieved results reveal the capability of this kind of electrodes to work without the electrolyte, providing a valuable interface with the skin, due to mixed electronic and ionic conductivity of PEDOT:PSS. These electrodes can be effectively used for acquiring ECG signals. 
Significance: Textile electrodes based on PEDOT:PSS represent an important milestone in wearable monitoring, as they present an easy and reproducible fabrication process, very good performance in wet and dry (at rest) conditions and a superior level of comfort with respect to textile electrodes proposed so far. This paves the way to their integration into smart garments.", "title": "" }, { "docid": "d18a2e1811f2d11e88c9ae780a8ede23", "text": "In this paper, we present the design of error-resilient machine learning architectures by employing a distributed machine learning framework referred to as classifier ensemble (CE). CE combines several simple classifiers to obtain a strong one. In contrast, centralized machine learning employs a single complex block. We compare the random forest (RF) and the support vector machine (SVM), which are representative techniques from the CE and centralized frameworks, respectively. Employing the dataset from UCI machine learning repository and architectural-level error models in a commercial 45 nm CMOS process, it is demonstrated that RF-based architectures are significantly more robust than SVM architectures in presence of timing errors due to process variations in near-threshold voltage (NTV) regions (0.3 V-0.7 V). In particular, the RF architecture exhibits a detection accuracy (Pdet) that varies by 3.2% while maintaining a median Pdet ≥ 0.9 at a gate level delay variation of 28.9%. In comparison, SVM exhibits a Pdet that varies by 16.8%. Additionally, we propose an error weighted voting technique that incorporates the timing error statistics of the NTV circuit fabric to further enhance robustness. Simulation results confirm that the error weighted voting achieves a Pdet that varies by only 1.4%, which is 12× lower compared to SVM.", "title": "" }, { "docid": "051c530bf9d49bf1066ddf856488dff1", "text": "This review paper focusses on DESMO-J, a comprehensive and stable Java-based open-source simulation library. DESMO-J is recommended in numerous academic publications for implementing discrete event simulation models for various applications. The library was integrated into several commercial software products. DESMO-J’s functional range and usability is continuously improved by the Department of Informatics of the University of Hamburg (Germany). The paper summarizes DESMO-J’s core functionality and important design decisions. It also compares DESMO-J to other discrete event simulation frameworks. Furthermore, latest developments and new opportunities are addressed in more detail. These include a) improvements relating to the quality and applicability of the software itself, e.g. a port to .NET, b) optional extension packages like visualization libraries and c) new components facilitating a more powerful and flexible simulation logic, like adaption to real time or a compact representation of production chains and similar queuing systems. Finally, the paper exemplarily describes how to apply DESMO-J to harbor logistics and business process modeling, thus providing insights into DESMO-J practice.", "title": "" }, { "docid": "79593cc56da377d834f33528b833641f", "text": "Machine learning offers a fantastically powerful toolkit for building complex systems quickly. This paper argues that it is dangerous to think of these quick wins as coming for free. Using the framework of technical debt, we note that it is remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning.
The goal of this paper is to highlight several machine learning specific risk factors and design patterns to be avoided or refactored where possible. These include boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, changes in the external world, and a variety of system-level anti-patterns. 1 Machine Learning and Complex Systems Real world software engineers are often faced with the challenge of moving quickly to ship new products or services, which can lead to a dilemma between speed of execution and quality of engineering. The concept of technical debt was first introduced by Ward Cunningham in 1992 as a way to help quantify the cost of such decisions. Like incurring fiscal debt, there are often sound strategic reasons to take on technical debt. Not all debt is necessarily bad, but technical debt does tend to compound. Deferring the work to pay it off results in increasing costs, system brittleness, and reduced rates of innovation. Traditional methods of paying off technical debt include refactoring, increasing coverage of unit tests, deleting dead code, reducing dependencies, tightening APIs, and improving documentation [4]. The goal of these activities is not to add new functionality, but to make it easier to add future improvements, be cheaper to maintain, and reduce the likelihood of bugs. One of the basic arguments in this paper is that machine learning packages have all the basic code complexity issues as normal code, but also have a larger system-level complexity that can create hidden debt. Thus, refactoring these libraries, adding better unit tests, and associated activity is time well spent but does not necessarily address debt at a systems level. In this paper, we focus on the system-level interaction between machine learning code and larger systems as an area where hidden technical debt may rapidly accumulate. At a system-level, a machine learning model may subtly erode abstraction boundaries. It may be tempting to re-use input signals in ways that create unintended tight coupling of otherwise disjoint systems. Machine learning packages may often be treated as black boxes, resulting in large masses of “glue code” or calibration layers that can lock in assumptions. Changes in the external world may make models or input signals change behavior in unintended ways, ratcheting up maintenance cost and the burden of any debt. Even monitoring that the system as a whole is operating as intended may be difficult without careful design.", "title": "" }, { "docid": "747e46fc4621604d6f551d909cbdf42b", "text": "Computational creativity is an emerging branch of artificial intelligence that places computers in the center of the creative process. This demonstration shows a computational system that creates flavorful, novel, and perhaps healthy culinary recipes by drawing on big data techniques. It brings analytics algorithms together with disparate data sources from culinary science, chemistry, and hedonic psychophysics.\n In its most powerful manifestation, the system operates through a mixed-initiative approach to human-computer interaction via turns between human and computer.
In particular, the sequential creation process is modeled after stages in human cognitive processes of creativity.\n The end result is an ingredient list, ingredient proportions, as well as a directed acyclic graph representing a partial ordering of culinary recipe steps.", "title": "" }, { "docid": "36fd1784579212b1df6248bfee7cc18a", "text": "The 'invisible hand' is a term originally coined by Adam Smith in The Theory of Moral Sentiments to describe the forces of self-interest, competition and supply and demand that regulate the resources in society. This metaphor continues to be used by economists to describe the self-regulating nature of a market economy. The same metaphor can be used to describe the RHO-specific guanine nucleotide dissociation inhibitor (RHOGDI) family, which operates in the background, as an invisible hand, using similar forces to regulate the RHO GTPase cycle.", "title": "" }, { "docid": "df4ca9ed339707e2135ed1eebb564fa1", "text": "Wireless-communication technology can be used to improve road safety and to provide Internet access inside vehicles. This paper proposes a cross-layer protocol called coordinated external peer communication (CEPEC) for Internet-access services and peer communications for vehicular networks. We assume that IEEE 802.16 base stations (BS) are installed along highways and that the same air interface is equipped in vehicles. Certain vehicles locating outside of the limited coverage of their nearest BSs can still get access to the Internet via a multihop route to their BSs. For Internet-access services, the objective of CEPEC is to increase the end-to-end throughput while providing a fairness guarantee in bandwidth usage among road segments. To achieve this goal, the road is logically partitioned into segments of equal length. A relaying head is selected in each segment that performs both local-packet collecting and aggregated packets relaying. The simulation results have shown that the proposed CEPEC protocol provides higher throughput with guaranteed fairness in multihop data delivery in vehicular networks when compared with the purely IEEE 802.16-based protocol.", "title": "" }, { "docid": "7498bc36a78f59eef834fdab5174e96f", "text": "We present two new algorithms FastAccSum and FastPrecSum, one to compute a faithful rounding of the sum of floating-point numbers and the other for a result “as if” computed in K-fold precision. Faithful rounding means the computed result either is one of the immediate floating-point neighbors of the exact result or is equal to the exact sum if this is a floating-point number. The algorithms are based on our previous algorithms AccSum and PrecSum and improve them by up to 25%. The first algorithm adapts to the condition number of the sum; i.e., the computing time is proportional to the difficulty of the problem. The second algorithm does not need extra memory, and the computing time depends only on the number of summands and K. Both algorithms are the fastest known in terms of flops. They allow good instruction-level parallelism so that they are also fast in terms of measured computing time. The algorithms require only standard floating-point addition, subtraction, and multiplication in one working precision, for example, double precision.", "title": "" }, { "docid": "87f3c12df54f395b9a24ccfc4dd10aa8", "text": "The ever increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. 
In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud - a huge amount of machine-readable knowledge encoded as RDF statements - with the aim of improving recommender systems effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of different feature selection techniques influences its performance in terms of accuracy and diversity. The experimental evaluation on two state of the art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features is able to overcome several state of the art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.", "title": "" }, { "docid": "6274424e5e8d4092ff936e329336ba58", "text": "INTRODUCTION\nLabial fusion is described as partial or complete adherence of the labia minora. Adhesions of the labia are extremely rare in the reproductive population with only a few cases described in the literature and none reported with pregnancy.\n\n\nCASE PRESENTATION\nA 24-year-old woman who had extensively fused labia with a pinhole opening at the upper midline with menstrual delay was diagnosed at six weeks of pregnancy. The case and its management are presented.\n\n\nCONCLUSION\nThe condition was treated surgically with complete resolution of the urinary symptoms.", "title": "" }, { "docid": "a75d3395a1d4859b465ccbed8647fbfe", "text": "PURPOSE\nThe influence of a core-strengthening program on low back pain (LBP) occurrence and hip strength differences were studied in NCAA Division I collegiate athletes.\n\n\nMETHODS\nIn 1998, 1999, and 2000, hip strength was measured during preparticipation physical examinations and occurrence of LBP was monitored throughout the year. Following the 1999-2000 preparticipation physicals, all athletes began participation in a structured core-strengthening program, which emphasized abdominal, paraspinal, and hip extensor strengthening. Incidence of LBP and the relationship with hip muscle imbalance were compared between consecutive academic years.\n\n\nRESULTS\nAfter incorporation of core strengthening, there was no statistically significant change in LBP occurrence. Side-to-side extensor strength between athletes participating in both the 1998-1999 and 1999-2000 physicals were no different. After core strengthening, the right hip extensor was, on average, stronger than that of the left hip extensor (P = 0.0001). More specific gender differences were noted after core strengthening. Using logistic regression, female athletes with weaker left hip abductors had a more significant probability of requiring treatment for LBP (P = 0.009)\n\n\nCONCLUSION\nThe impact of core strengthening on collegiate athletes has not been previously examined. These results indicated no significant advantage of core strengthening in reducing LBP occurrence, though this may be more a reflection of the small numbers of subjects who actually required treatment. The core program, however, seems to have had a role in modifying hip extensor strength balance. 
The association between hip strength and future LBP occurrence, observed only in females, may indicate the need for more gender-specific core programs. The need for a larger scale study to examine the impact of core strengthening in collegiate athletes is demonstrated.", "title": "" }, { "docid": "cb6d3b025e0047a78c9641d5f10ecf07", "text": "Surgical robotics is an evolving field with great advances having been made over the last decade. The origin of robotics was in the science-fiction literature and from there industrial applications, and more recently commercially available, surgical robotic devices have been realized. In this review, we examine the field of robotics from its roots in literature to its development for clinical surgical use. Surgical mills and telerobotic devices are discussed, as are potential future developments.", "title": "" }, { "docid": "c22a769dee080ec2e145a12c8588f0f8", "text": "Chicken chicken chicken: chicken chicken. This is actually no major typing error but well and truly the title of both a publication in the 2006 Annals of Improbable Research [2] and a homonymous talk at the 2007 AAAS conference by the software engineer Doug Zongker, composed of totally serious looking texts, graphs and diagrams based exclusively on a vocabulary restricted to the common name of Gallus gallus domesticus. Apart from having caused the open-plan hilarity of the scientific public, chicken and more generally fowl also happen to be the main reservoir and evolutionary playground for influenza A viruses that occasionally jump over to humans and even more occasionally cause a media-centered commotion. Indeed, manifold combinations of hemagglutinin H1 to H16 and neuraminidase N1 to N9, the two surface proteins used to subtype the virus, are all found in wild water birds and at least seven of them have made it into humans until now [3]. Interestingly, that renders birds apparently better at handing down infectious agents to humans than those closest relatives, African apes, who managed to pass “only” two diseases on to us. Admittedly, the agents in question are Plasmodium falciparum and HIV-1, but considering the broad range of available pathogens, the number remains low [4]. It remains nevertheless ambiguous if the simian pathogens are untalented or rather lacked the opportunities of their winged relatives, spreading their gatecrashers through direct contact on live poultry markets [3]. The latest edition of the series is H7N9, which caused its first human victims in February 2013 [5,6], after about two years of patching up starting from H7N3, H9N9 and a preliminary version of H7N9 [7], with the involuntary participation of ducks, bramblings and other migratory birds as vectors [6]. Above all praised as a victory of the rapid and coordinated reaction of the Chinese health instances and the fast and efficient sharing of information at the international level [4], from the media coverage point of view, things have been remarkably calm around H7N9 lately, despite the fact that the season-related second outbreak in China at the end of this winter caused way more fatalities than the one in 2013 [8]. Papers were busily", "title": "" }, { "docid": "8941cc8c4b2d7a354baf03fb52f43a07", "text": "Floor surfaces are notable for the diverse roles that they play in our negotiation of everyday environments. Haptic communication via floor surfaces could enhance or enable many computer-supported activities that involve movement on foot.
In this paper, we discuss potential applications of such interfaces in everyday environments and present a haptically augmented floor component through which several interaction methods are being evaluated. We describe two approaches to the design of structured vibrotactile signals for this device. The first is centered on a musical phrase metaphor, as employed in prior work on tactile display. The second is based upon the synthesis of rhythmic patterns of virtual physical impact transients. We report on an experiment in which participants were able to identify communication units that were constructed from these signals and displayed via a floor interface at well above chance levels. The results support the feasibility of tactile information display via such interfaces and provide further indications as to how to effectively design vibrotactile signals for them.", "title": "" }, { "docid": "7e354ca56591a9116d651b53c6ab744d", "text": "We have implemented a concurrent copying garbage collector that uses replicating garbage collection. In our design, the client can continuously access the heap during garbage collection. No low-level synchronization between the client and the garbage collector is required on individual object operations. The garbage collector replicates live heap objects and periodically synchronizes with the client to obtain the client's current root set and mutation log. An experimental implementation using the Standard ML of New Jersey system on a shared-memory multiprocessor demonstrates excellent pause time performance and moderate execution time speedups.", "title": "" } ]
scidocsrr
6be79b9c169a7fd372748a0b335d0656
Hopping in legged systems — Modeling and simulation for the two-dimensional one-legged case
[ { "docid": "064505e942f5f8fd5f7e2db5359c7fe8", "text": "THE hopping of kangaroos is reminiscent of a bouncing ball or the action of a pogo stick. This suggests a significant storage and recovery of energy in elastic elements. One might surmise that the kangaroo's first hop would require a large amount of energy whereas subsequent hops could rely extensively on elastic rebound. If this were the case, then the kangaroo's unusual saltatory mode of locomotion should be an energetically inexpensive way to move.", "title": "" } ]
[ { "docid": "b2bfcd7d72bd9d774add0008dcab86c4", "text": "Titanium dioxide nanoparticles, obtained using the sol-gel method and modified with organic solvents, such as acetone, acetonitrile, benzene, diethyl ether, dimethyl sulfoxide, toluene, and chloroform, were used as the filler of polydimethylsiloxane-based electrorheological fluids. The effect of electric field strength on the shear stress and yield stress of electrorheological fluids was investigated, as well as the spectra of their dielectric relaxation in the frequency range from 25 to 106 Hz. Modification of titanium dioxide by polar molecules was found to enhance the electrorheological effect, as compared with unmodified TiO2, in accordance with the widely accepted concept of polar molecule dominated electrorheological effect (PM-ER). The most unexpected result of this study was an increase in the electrorheological effect during the application of nonpolar solvents with zero or near-zero dipole moments as the modifiers. It is suggested that nonpolar solvents, besides providing additional polarization effects at the filler particles interface, alter the internal pressure in the gaps between the particles. As a result, the filler particles are attracted to one another, leading to an increase in their aggregation and the formation of a network of bonds between the particles through liquid bridge contacts. Such changes in the electrorheological fluid structure result in a significant increase in the mechanical strength of the structures that arise when an electric field is applied, and an increase in the observed electrorheological effect in comparison with the unmodified titanium dioxide.", "title": "" }, { "docid": "5e962b323daa883da2ae8416d1fc10fa", "text": "Network security analysis and ensemble data visualization are two active research areas. Although they are treated as separate domains, they share many common challenges and characteristics. Both focus on scalability, time-dependent data analytics, and exploration of patterns and unusual behaviors in large datasets. These overlaps provide an opportunity to apply ensemble visualization research to improve network security analysis. To study this goal, we propose methods to interpret network security alerts and flow traffic as ensemble members. We can then apply ensemble visualization techniques in a network analysis environment to produce a network ensemble visualization system. Including ensemble representations provide new, in-depth insights into relationships between alerts and flow traffic. Analysts can cluster traffic with similar behavior and identify traffic with unusual patterns, something that is difficult to achieve with high-level overviews of large network datasets. Furthermore, our ensemble approach facilitates analysis of relationships between alerts and flow traffic, improves scalability, maintains accessibility and configurability, and is designed to fit our analysts' working environment, mental models, and problem solving strategies.", "title": "" }, { "docid": "9422f8c85859aca10e7d2a673b0377ba", "text": "Many adolescents are experiencing a reduction in sleep as a consequence of a variety of behavioral factors (e.g., academic workload, social and employment opportunities), even though scientific evidence suggests that the biological need for sleep increases during maturation. Consequently, the ability to effectively interact with peers while learning and processing novel information may be diminished in many sleepdeprived adolescents. 
Furthermore, sleep deprivation may account for reductions in cognitive efficiency in many children and adolescents with special education needs. In response to recognition of this potential problem by parents, educators, and scientists, some school districts have implemented delayed bus schedules and school start times to allow for increased sleep duration for high school students, in an effort to increase academic performance and decrease behavioral problems. The long-term effects of this change are yet to be determined; however, preliminary studies suggest that the short-term impact on learning and behavior has been beneficial. Thus, many parents, teachers, and scientists are supporting further consideration of this information to formulate policies that may maximize learning and developmental opportunities for children. Although changing school start times may be an effective method to combat sleep deprivation in most adolescents, some adolescents experience sleep deprivation and consequent diminished daytime performance because of common underlying sleep disorders (e.g., asthma or sleep apnea). In such cases, surgical, pharmaceutical, or respiratory therapy, or a combination of the three, interventions are required to restore normal sleep and daytime performance.", "title": "" }, { "docid": "1efee2d22c2f982ba94d874e061adc7d", "text": "A PWM plus phase-shift control bidirectional DC-DC converter is proposed. In this converter, PWM control and phase-shift control are combined to reduce current stress and conducting loss, and to expand ZVS range. The operation principle and analysis of the converter are explained, and ZVS condition is derived. A prototype of PWM plus phase-shift bidirectional DC-DC converter is built to verify analysis.", "title": "" }, { "docid": "a4268c77c3f51ca8d05fa0d108682883", "text": "In this paper, we propose a locality-constrained and sparsity-encouraged manifold fitting approach, aiming at capturing the locally sparse manifold structure into neighborhood graph construction by exploiting a principled optimization model. The proposed model formulates neighborhood graph construction as a sparse coding problem with the locality constraint, therefore achieving simultaneous neighbor selection and edge weight optimization. The core idea underlying our model is to perform a sparse manifold fitting task for each data point so that close-by points lying on the same local manifold are automatically chosen to connect and meanwhile the connection weights are acquired by simple geometric reconstruction. We term the novel neighborhood graph generated by our proposed optimization model M-Fitted Graph since such a graph stems from sparse manifold fitting. To evaluate the robustness and effectiveness of M-fitted graphs, we leverage graph-based semisupervised learning as the testbed. Extensive experiments carried out on six benchmark datasets validate that the proposed M-fitted graph is superior to state-of-the-art neighborhood graphs in terms of classification accuracy using popular graph-based semi-supervised learning methods.", "title": "" }, { "docid": "ecb146ae27419d9ca1911dc4f13214c1", "text": "In this paper, a simple mixed integer programming model for distribution center location is proposed. Based on this simple model, we introduce two important factors, transport mode and carbon emission, and extend it to a model to describe the location problem for a green supply chain. Subsequently, the IBM Watson Implosion Technology (WIT) tool was introduced to describe and solve them.
By changing the price of crude oil, we illustrate its impact on distribution center locations and transportation mode options for the green supply chain. From the case studies, we find that, as the crude oil price increases, the profits of the whole supply chain will decrease, carbon emission will also decrease to some degree, while the number of opened distribution centers will increase.", "title": "" }, { "docid": "98b0ce9e943ab1a22c4168ba1c79ceb6", "text": "Along with rapid advancement of power semiconductors, voltage multipliers have introduced new series of pulsed power generators. In this paper, current topologies of capacitor-diode voltage multipliers (CDVM) are investigated. Alternative structures for voltage multiplier based on power electronics switches are presented in high voltage pulsed power supplies application. The new topology is able to generate the desired high voltage output without increasing the voltage rating of semiconductor devices as well as capacitors. Finally, a comparative analysis is carried out between different CDVM topologies. Experimental and simulation results are presented to verify the analysis.", "title": "" }, { "docid": "8d1cff18cfad392ef70e364d6d150c9d", "text": "Preventing feature co-adaptation by encouraging independent contributions from different features often improves classification and regression performance. Dropout training (Hinton et al., 2012) does this by randomly dropping out (zeroing) hidden units and input features during training of neural networks. However, repeatedly sampling a random subset of input features makes training much slower. Based on an examination of the implied objective function of dropout training, we show how to do fast dropout training by sampling from or integrating a Gaussian approximation, instead of doing Monte Carlo optimization of this objective. This approximation, justified by the central limit theorem and empirical evidence, gives an order of magnitude speedup and more stability. We show how to do fast dropout training for classification, regression, and multilayer neural networks. Beyond dropout, our technique is extended to integrate out other types of noise and small image transformations.", "title": "" }, { "docid": "077acc2eb4823f55bf1b7a923a31f9df", "text": "This paper presents a learning-based steganalysis/detection method to attack spatial domain least significant bit (LSB) matching steganography in grayscale images, which is the antetype of many sophisticated steganographic methods. We model the message embedded by LSB matching as the independent noise to the image, and theoretically prove that LSB matching smoothes the histogram of multi-order differences. Because of the dependency among neighboring pixels, histogram of low order differences can be approximated by Laplace distribution. The smoothness caused by LSB matching is especially apparent at the peak of the histogram. Consequently, the low order differences of image pixels are calculated. The co-occurrence matrix is utilized to model the differences with the small absolute value in order to extract features. Finally, support vector machine classifiers are trained with the features so as to identify a test image as either an original or a stego image. The proposed method is evaluated by LSB matching and its improved version “Hugo”. In addition, the proposed method is compared with state-of-the-art steganalytic methods. The experimental results demonstrate the reliability of the new detector.
Copyright © 2013 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "25822c79792325b86a90a477b6e988a1", "text": "As the social networking sites get more popular, spammers target these sites to spread spam posts. Twitter is one of the most popular online social networking sites where users communicate and interact on various topics. Most of the current spam filtering methods in Twitter focus on detecting the spammers and blocking them. However, spammers can create a new account and start posting new spam tweets again. So there is a need for robust spam detection techniques to detect the spam at tweet level. These types of techniques can prevent the spam in real time. To detect the spam at tweet level, often features are defined, and appropriate machine learning algorithms are applied in the literature. Recently, deep learning methods are showing fruitful results on several natural language processing tasks. We want to use the potential benefits of these two types of methods for our problem. Toward this, we propose an ensemble approach for spam detection at tweet level. We develop various deep learning models based on convolutional neural networks (CNNs). Five CNNs and one feature-based model are used in the ensemble. Each CNN uses different word embeddings (Glove, Word2vec) to train the model. The feature-based model uses content-based, user-based, and n-gram features. Our approach combines both deep learning and traditional feature-based models using a multilayer neural network which acts as a meta-classifier. We evaluate our method on two data sets, one data set is balanced, and another one is imbalanced. The experimental results show that our proposed method outperforms the existing methods.", "title": "" }, { "docid": "8b998b9f8ea6cfe5f80a5b3a1b87f807", "text": "We describe a very simple bag-of-words baseline for visual question answering. This baseline concatenates the word features from the question and CNN features from the image to predict the answer. When evaluated on the challenging VQA dataset [2], it shows comparable performance to many recent approaches using recurrent neural networks. To explore the strength and weakness of the trained model, we also provide an interactive web demo1, and open-source code2.", "title": "" }, { "docid": "3f07c471245b2e8cc369bc591a035201", "text": "Test automation is a widely-used approach to reduce the cost of manual software testing. However, if it is not planned or conducted properly, automated testing would not necessarily be more cost effective than manual testing. Deciding what parts of a given System Under Test (SUT) should be tested in an automated fashion and what parts should remain manual is a frequently-asked and challenging question for practitioner testers. In this study, we propose a search-based approach for deciding what parts of a given SUT should be tested automatically to gain the highest Return On Investment (ROI). This work is the first systematic approach for this problem, and significance of our approach is that it considers automation in the entire testing process (i.e., from test-case design, to test scripting, to test execution, and test-result evaluation). The proposed approach has been applied in an industrial setting in the context of a software product used in the oil and gas industry in Canada. Among the results of the case study is that, when planned and conducted properly using our decision-support approach, test automation provides the highest ROI. 
In this study, we show that if the automation decision is made effectively, test-case design, test execution, and test evaluation can result in about 307%, 675%, and 41% ROI in 10 rounds of using automated test suites.", "title": "" }, { "docid": "e28bbc2f66ee827c37a7696061d6e861", "text": "This paper presents a new class of multilevel inverters based on a multilevel dc link (MLDCL) and a bridge inverter to reduce the number of switches, clamping diodes, or capacitors. An MLDCL can be a diode-clamped phase leg, a flying-capacitor phase leg, or cascaded half-bridge cells with each cell having its own dc source. A multilevel voltage-source inverter can be formed by connecting one of the MLDCLs with a single-phase bridge inverter. The MLDCL provides a dc voltage with the shape of a staircase approximating the rectified shape of a commanded sinusoidal wave, with or without pulsewidth modulation, to the bridge inverter, which in turn alternates the polarity to produce an ac voltage. Compared with the cascaded H-bridge, diode-clamped, and flying-capacitor multilevel inverters, the MLDCL inverters can significantly reduce the switch count as well as the number of gate drivers as the number of voltage levels increases. For a given number of voltage levels m, the required number of active switches is 2×(m-1) for the existing multilevel inverters but is m+3 for the MLDCL inverters. Simulation and experimental results are included to verify the operating principles of the MLDCL inverters.", "title": "" }, { "docid": "b5de2615e93f2a7fb1523e9c6fbec4d6", "text": "The technology of image segmentation is widely used in medical image processing, face recognition, pedestrian detection, etc. The current image segmentation techniques include region-based segmentation, edge detection segmentation, segmentation based on clustering, segmentation based on weakly-supervised learning in CNN, etc. This paper analyzes and summarizes these algorithms of image segmentation, and compares the advantages and disadvantages of different algorithms. Finally, we make a prediction of the development trend of image segmentation with the combination of these algorithms.", "title": "" }, { "docid": "040b56db2f85ad43ed9f3f9adbbd5a71", "text": "This study examined the relations between source credibility of eWOM (electronic word of mouth), perceived risk and food products customer's information adoption mediated by argument quality and information usefulness. eWOM has been commonly used to refer the customers during decision-making process for food commodities. Based on this study, we used Elaboration Likelihood Model of information adoption presented by Sussman and Siegal (2003) to check the willingness to buy. Non-probability purposive samples of 300 active participants were taken through questionnaire from several regions of the Republic of China and analyzed the data through structural equation modeling (SEM) accordingly. We discussed whether eWOM source credibility and perceived risk would impact the degree of information adoption through argument quality and information usefulness. It reveals that eWOM has positively influenced on perceived risk by source credibility to the extent of information adoption and, for this, customers use eWOM for the reduction of the potential hazards when decision making. Companies can make their marketing strategies according to their target towards loyal clients' needs through online food-product forum review sites. © 2016 Elsevier Ltd.
All rights reserved.", "title": "" }, { "docid": "96763245ab037e57abb3546aa12bc4fb", "text": "This paper seeks understanding the user behavior in a social network created essentially by video interactions. We present a characterization of a social network created by the video interactions among users on YouTube, a popular social networking video sharing system. Our results uncover typical user behavioral patterns as well as show evidences of anti-social behavior such as self-promotion and other types of content pollution.", "title": "" }, { "docid": "0b01870332dd93897fbcecb9254c40b9", "text": "Computer-aided detection or decision support systems aim to improve breast cancer screening programs by helping radiologists to evaluate digital mammography (DM) exams. Commonly such methods proceed in two steps: selection of candidate regions for malignancy, and later classification as either malignant or not. In this study, we present a candidate detection method based on deep learning to automatically detect and additionally segment soft tissue lesions in DM. A database of DM exams (mostly bilateral and two views) was collected from our institutional archive. In total, 7196 DM exams (28294 DM images) acquired with systems from three different vendors (General Electric, Siemens, Hologic) were collected, of which 2883 contained malignant lesions verified with histopathology. Data was randomly split on an exam level into training (50%), validation (10%) and testing (40%) of deep neural network with u-net architecture. The u-net classifies the image but also provides lesion segmentation. Free receiver operating characteristic (FROC) analysis was used to evaluate the model, on an image and on an exam level. On an image level, a maximum sensitivity of 0.94 at 7.93 false positives (FP) per image was achieved. Similarly, per exam a maximum sensitivity of 0.98 at 7.81 FP per image was achieved. In conclusion, the method could be used as a candidate selection model with high accuracy and with the additional information of lesion segmentation.", "title": "" }, { "docid": "20705a14783c89ac38693b2202363c1f", "text": "This paper analyzes the effect of employee recognition, pay, and benefits on job satisfaction. In this cross-sectional study, survey responses from university students in the U.S. (n = 457), Malaysia (n = 347) and Vietnam (n = 391) were analyzed. Employee recognition, pay, and benefits were found to have a significant impact on job satisfaction, regardless of home country income level (high, middle or low income) and culture (collectivist or individualist). However, the effect of benefits on job satisfaction was significantly more important for U.S. respondents than for respondents from Malaysia and Vietnam. The authors conclude that both financial and nonfinancial rewards have a role in influencing job satisfaction, which ultimately impacts employee performance. Theoretical and practical implications for developing effective recruitment and retention policies for employees are also discussed.", "title": "" }, { "docid": "688933de8076e6fd37dde125a35c8e34", "text": "This article examines the role of cognitive, metacognitive, and motivational skills in problem solving. Cognitive skills include instructional objectives, components in a learning hierarchy, and components in information processing. Metacognitive skills include strategies for reading comprehension, writing, and mathematics. Motivational skills include motivation based on interest, self-efficacy, and attributions. 
All three kinds of skills are required for successful problem solving in academic settings.", "title": "" }, { "docid": "ba6f0206decf2b9bde415ffdfcd32eb9", "text": "Broad host-range mini-Tn7 vectors facilitate integration of single-copy genes into bacterial chromosomes at a neutral, naturally evolved site. Here we present a protocol for employing the mini-Tn7 system in bacteria with single attTn7 sites, using the example Pseudomonas aeruginosa. The procedure involves, first, cloning of the genes of interest into an appropriate mini-Tn7 vector; second, co-transfer of the recombinant mini-Tn7 vector and a helper plasmid encoding the Tn7 site-specific transposition pathway into P. aeruginosa by either transformation or conjugation, followed by selection of insertion-containing strains; third, PCR verification of mini-Tn7 insertions; and last, optional Flp-mediated excision of the antibiotic-resistance selection marker present on the chromosomally integrated mini-Tn7 element. From start to verification of the insertion events, the procedure takes as little as 4 d and is very efficient, yielding several thousand transformants per microgram of input DNA or conjugation mixture. In contrast to existing chromosome integration systems, which are mostly based on species-specific phage or more-or-less randomly integrating transposons, the mini-Tn7 system is characterized by its ready adaptability to various bacterial hosts, its site specificity and its efficiency. Vectors have been developed for gene complementation, construction of gene fusions, regulated gene expression and reporter gene tagging.", "title": "" } ]
scidocsrr
fd2ffeff3b8903155bed9834b4f66c1b
Drones for smart cities: Issues in cybersecurity, privacy, and public safety
[ { "docid": "d90efd08169f350d336afcbea291306c", "text": "This paper describes a multi-UAV distributed decisional architecture developed in the framework of the AWARE Project together with a set of tests with real Unmanned Aerial Vehicles (UAVs) and Wireless Sensor Networks (WSNs) to validate this approach in disaster management and civil security applications. The paper presents the different components of the AWARE platform and the scenario in which the multi-UAV missions were carried out. The missions described in this paper include surveillance with multiple UAVs, sensor deployment and fire threat confirmation. In order to avoid redundancies, instead of describing the operation of the full architecture for every mission, only non-overlapping aspects are highlighted in each one. Key issues in multi-UAV systems such as distributed task allocation, conflict resolution and plan refining are solved in the execution of the missions.", "title": "" } ]
[ { "docid": "893fe4d696f782dadb7be2b2db40550f", "text": "Compared to the categorical approach that represents affective states as several discrete classes (e.g., positive and negative), the dimensional approach represents affective states as continuous numerical values on multiple dimensions, such as the valence-arousal (VA) space, thus allowing for more fine-grained sentiment analysis. In building dimensional sentiment applications, affective lexicons with valence-arousal ratings are useful resources but are still very rare. Therefore, this study proposes a weighted graph model that considers both the relations of multiple nodes and their similarities as weights to automatically determine the VA ratings of affective words. Experiments on both English and Chinese affective lexicons show that the proposed method yielded a smaller error rate on VA prediction than the linear regression, kernel method, and pagerank algorithm used in previous studies.", "title": "" }, { "docid": "e5d523d8a1f584421dab2eeb269cd303", "text": "In this paper, we propose a novel appearance-based method for person re-identification, that condenses a set of frames of the same individual into a highly informative signature, called Histogram Plus Epitome, HPE. It incorporates complementary global and local statistical descriptions of the human appearance, focusing on the overall chromatic content, via histograms representation, and on the presence of recurrent local patches, via epitome estimation. The matching of HPEs provides optimal performances against low resolution, occlusions, pose and illumination variations, defining novel state-of-the-art results on all the datasets considered.", "title": "" }, { "docid": "7c1146ddc6e0904e0b30266b164e91f7", "text": "The number of digital images that needs to be acquired, analyzed, classified, stored and retrieved in the medical centers is exponentially growing with the advances in medical imaging technology. Accordingly, medical image classification and retrieval has become a popular topic in the recent years. Despite many projects focusing on this problem, proposed solutions are still far from being sufficiently accurate for real-life implementations. Interpreting medical image classification and retrieval as a multi-class classification task, in this work, we investigate the performance of five different feature types in a SVM-based learning framework for classification of human body X-Ray images into classes corresponding to body parts. Our comprehensive experiments show that four conventional feature types provide performances comparable to the literature with low per-class accuracies, whereas local binary patterns produce not only very good global accuracy but also good class-specific accuracies with respect to the features used in the literature.", "title": "" }, { "docid": "1f2832276b346316b15fe05d8593217c", "text": "This paper presents a new method for generating inductive loop invariants that are expressible as boolean combinations of linear integer constraints. The key idea underlying our technique is to perform a backtracking search that combines Hoare-style verification condition generation with a logical abduction procedure based on quantifier elimination to speculate candidate invariants. Starting with true, our method iteratively strengthens loop invariants until they are inductive and strong enough to verify the program. A key feature of our technique is that it is lazy: It only infers those invariants that are necessary for verifying program correctness. 
Furthermore, our technique can infer arbitrary boolean combinations (including disjunctions) of linear invariants. We have implemented the proposed approach in a tool called HOLA. Our experiments demonstrate that HOLA can infer interesting invariants that are beyond the reach of existing state-of-the-art invariant generation tools.", "title": "" }, { "docid": "450f13659ece54bee1b4fe61cc335eb2", "text": "Though considerable effort has recently been devoted to hardware realization of one-dimensional chaotic systems, the influence of implementation inaccuracies is often underestimated and limited to non-idealities in the non-linear map. Here we investigate the consequences of sample-and-hold errors. Two degrees of freedom in the design space are considered: the choice of the map and the sample-and-hold architecture. Current-mode systems based on Bernoulli Shift, on Tent Map and on Tailed Tent Map are taken into account and coupled with an order-one model of sample-and-hold to ascertain error causes and suggest implementation improvements. key words: chaotic systems, analog circuits, sample-and-hold errors", "title": "" }, { "docid": "7f6e966f3f924e18cb3be0ae618309e6", "text": "designed shapes incorporating typedesign tradition, the rules related to visual appearance, and the design ideas of a skilled character designer. The typographic design process is structured and systematic: letterforms are visually related in weight, contrast, space, alignment, and style. To create a new typeface family, type designers generally start by designing a few key characters—such as o, h, p, and v— incorporating the most important structure elements such as vertical stems, round parts, diagonal bars, arches, and serifs (see Figure 1). They can then use the design features embedded into these structure elements (stem width, behavior of curved parts, contrast between thick and thin shape parts, and so on) to design the font’s remaining characters. Today’s industrial font description standards such as Adobe Type 1 or TrueType represent typographic characters by their shape outlines, because of the simplicity of digitizing the contours of well-designed, large-size master characters. However, outline characters only implicitly incorporate the designer’s intentions. Because their structure elements aren’t explicit, creating aesthetically appealing derived designs requiring coherent changes in character width, weight (boldness), and contrast is difficult. Outline characters aren’t suitable for optical scaling, which requires relatively fatter letter shapes at small sizes. Existing approaches for creating derived designs from outline fonts require either specifying constraints to maintain the coherence of structure elements across different characters or creating multiple master designs for the interpolation of derived designs. We present a new approach for describing and synthesizing typographic character shapes. Instead of describing characters by their outlines, we conceive each character as an assembly of structure elements (stems, bars, serifs, round parts, and arches) implemented by one or several shape components. We define the shape components by typeface-category-dependent global parameters such as the serif and junction types, by global font-dependent metrics such as the location of reference lines and the width of stems and curved parts, and by group and local parameters. 
(See the sidebar “Previous Work” for background information on the field of parameterizable fonts.)", "title": "" }, { "docid": "144bb8e869671843cb5d8053e2ee861d", "text": "We investigate whether physicians' financial incentives influence health care supply, technology diffusion, and resulting patient outcomes. In 1997, Medicare consolidated the geographic regions across which it adjusts physician payments, generating area-specific price shocks. Areas with higher payment shocks experience significant increases in health care supply. On average, a 2 percent increase in payment rates leads to a 3 percent increase in care provision. Elective procedures such as cataract surgery respond much more strongly than less discretionary services. Non-radiologists expand their provision of MRIs, suggesting effects on technology adoption. We estimate economically small health impacts, albeit with limited precision.", "title": "" }, { "docid": "9948738a487ed899ec50ac292e1f9c6d", "text": "A Web survey of 1,715 college students was conducted to examine Facebook Groups users' gratifications and the relationship between users' gratifications and their political and civic participation offline. A factor analysis revealed four primary needs for participating in groups within Facebook: socializing, entertainment, self-status seeking, and information. These gratifications vary depending on user demographics such as gender, hometown, and year in school. The analysis of the relationship between users' needs and civic and political participation indicated that, as predicted, informational uses were more correlated to civic and political action than to recreational uses.", "title": "" }, { "docid": "f7efa63a206b5c9bc02c1b2ae37a73ee", "text": "Many traditional organizations have undertaken major initiatives to leverage the Internet to transform how they coordinate value activities with customers, suppliers and other business partners with the objective of improving firm performance. This paper addresses processes through which business value is created through such Internet-enabled value chain activities. Relying on the resource-based view (RBV) of the firm, we propose a model positing that a firm's abilities to coordinate and exploit firm resources – processes, IT and readiness of customers and suppliers – create online informational capabilities – a higher order resource – which then leads to improved operational and financial performance. The outcome of a firm's online informational capabilities is reflected in superior operational performance through customer and supplier-side digitization efforts, which reflect the extent to which transactions and external interactions occur electronically. We also hypothesize that increased customer and supplier-side digitization leads to better financial performance. The model is tested with data from over 1000 firms in manufacturing, retail and wholesale sectors. 
The analysis suggests that while most firms are lagging in their supplier-side initiatives relative to the customer-side, supplier-side digitization has a strong positive impact on customer-side digitization, which, in turn, leads to better financial performance. Further, both customer and supplier readiness to engage in digital interactions are shown to be as important as a firm's internal digitization initiatives, implying that a firm's transformation-related decisions should include its customers' and suppliers' resources and incentives.", "title": "" }, { "docid": "e743bfe8c4f19f1f9a233106919c99a7", "text": "We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.", "title": "" }, { "docid": "22070b95e5eeebf17bc7019aabc5f5b0", "text": "Abstracts of the 2nd Cancer Cachexia Conference, Montreal, Canada, 26-28 September 2014 Published online 31 March 2015 in Wiley Online Library (wileyonlinelibrary.com) © 2015 John Wiley & Sons Ltd 1-01 Body composition and prognostication in cancer Vickie Baracos Department of Oncology, University of Alberta, Edmonton Alberta, Canada Cancer cachexia contributes to poor prognosis through progressive depletion of the body's energy and protein reserves; research is revealing the impact of the quantity of these reserves on survival. Our group has exploited computed tomography (CT) images to study body composition in cancer patients. We argue that CT taken for the purposes of diagnosis and routine follow-up can be used to derive clinically useful information on skeletal muscle and fat amount and distribution. Population-based data sets have been analyzed, revealing wide variation in individual proportions of fat and muscle (Prado et al. Lancet Oncology 2008;9:629–35; Martin et al. J. Clin Oncol. 2013: 31:1539–47). Muscle loss during aging is well known and is prognostic of frailty, falls, fractures, loss of independence, increased length of hospital stay, infectious complications in hospital and mortality. Muscle depletion is not limited to people who appear underweight and it may be a hidden condition in normal weight, overweight or obese people (i.e. sarcopenic obesity). Disparate behaviour of skeletal muscle and fat was acknowledged by an international consensus of experts on cancer cachexia, defined as being characterized by loss of skeletal muscle with or without loss of fat mass. 
Within the large interindividual variation of body composition in cancer patients, several consistent themes are emerging. Skeletal muscle depletion is a powerful predictor of cancer related mortality as well as of severe toxicity during systemic chemotherapy. Distinct from skeletal muscle, the fat mass is an important reserve of energy. High fat mass (i.e. obesity) appears to confer a survival advantage in patients with diseases associated with wasting, including cancer, rather than a disadvantage as understood from studies of all-cause mortality. The larger energy reserve of obese persons is thought to confer this advantage. Obesity predicted higher survival especially strongly when sarcopenia is absent. To specifically understand the relationships between body composition and cancer outcomes, we have reviewed several thousand clinical CT images. We used statistical methods (i.e. optimal stratification) to define muscle mass cutpoints that relate significantly to increased mortality and evaluated them in survival models alongside conventional covariates including cancer site, stage and performance status. Muscle depletion is associated with mortality in diverse tumor groups including patients with cancers of the pancreas, lung, breast and gastrointestinal tract, liver, bladder and kidney. Cancer patients who are cachexic by conventional criteria (involuntary weight loss) and by the additional criterion of severe muscle depletion share a very poor prognosis, regardless of overall body weight. Severe muscle depletion was identified in patients with cancers of the breast, colon, lung, kidney, liver, head & neck and lymphoma and these consistently had worse toxicity resulting in dose reductions or definitive termination of therapy when treated with 5-FU, capecitabine, sorafenib, sunitinib, carboplatin, cisplatin or a regimen (5FU with epirubicin & cyclophosphamide; 5FU with oxaliplatin or CPT 11). Reduced treatment may explain excess early mortality in patients affected by severe muscle depletion. Survival models including cachexia and body weight/composition characteristics showed excellent fit (i.e. concordance statistics >0.9) and outperformed prediction models using only conventional cancer related covariates (C-statistics 0.75-0.8). In renal cell carcinoma muscle depletion was independent of the frequently used Memorial Sloan Kettering Cancer Center prognostic score and similar results were seen for muscle depletion in lymphoma independent of the FLIPI prognostic score. 1–02 Myostatin as a marker of cachexia in gastric cancer Maurizio Muscaritoli, Zaira Aversa and Filippo Rossi Fanelli Department of Clinical Medicine Sapienza, University of Rome, Rome, Italy Myostatin, also known as growth and differentiation factor-8 (GDF-8), is a negative regulator of muscle mass, belonging to the TGF-β superfamily. Myostatin is secreted as an inactive propeptide that is cleaved to generate a mature ligand, whose activity may be regulated in vivo by association and dissociation with binding proteins, including propeptide itself as well as follistatin or related molecules. Active myostatin binds the activin type II B receptor (ActRIIB) and, to a lesser extent, the related ActRIIA, resulting in the phosphorylation and consequent recruitment of the low-affinity type I receptor ALK (activin receptor like-kinase)-4 or ALK-5. 
This binding induces phosphorylation and activation of the transcription factors SMAD2 and 3 [mammalian homologue of Drosophila MAD (Mothers-Against-Decapentaplegic gene)], which translocate into the nucleus and together with SMAD 4 regulate the expression of target genes. In addition, myostatin has been suggested to exert its action through different pathways, such as the extracellular signal-regulated kinase (ERK)/mitogen activated protein kinase (MAPK) cascade. Moreover, cross-talking between myostatin pathway and the IGF-1 axis has been postulated. Inactivating mutations of myostatin gene have been found in the “double-muscled cattle phenotype” as well as in humans. Myostatin null mice are characterized by marked muscle enlargement (~100 to 200% more than controls), exhibiting both fiber hypertrophy and hyperplasia, whereas systemic administration of myostatin in adult mice induces profound muscle and fat loss. Moreover, high myostatin protein levels have been reported in conditions associated with muscle depletion, such as aging, denervation atrophy, or mechanical unloading. Results from our laboratory have shown that myostatin signaling is enhanced in skeletal muscle of tumor-bearing rats and mice. Similarly, others have shown that myostatin inhibition, either by antisense oligonucleotides or by administration of an Activin Receptor II B/Fragment-crystallizable (ActRIIB/Fc) fusion protein or ActRIIB-soluble form, prevent muscle wasting in tumor-bearing mice. When myostatin signaling was studied in muscle biopsies obtained during surgical procedure from non-weight losing gastric cancer patients, we found that protein expression of both myostatin and phosphorylated GSK-3β were significantly increased, while phosphorylated-SMAD 2/3 did not significantly change with respect to controls. Although the reason of this result is not known at present, a possible explanation could be that myostatin increase is paralleled by a concomitant rise in the expression of follistatin, a physiological inhibitor of myostatin. This would result in a myostatin/follistatin ratio similar to controls, thereby maintaining the myostatin signaling in basal conditions. In addition, unchanged levels of pSmad 2/3, despite increased myostatin protein expression, also may reflect a modulation of other molecules acting through the activin receptor type IIB, such as activin A. Interestingly enough, we found that the expression levels of muscle myostatin mRNA are significantly reduced in gastric cancer patients. Although the reason for these apparently contradictory results is not known at present, it is conceivable that the differences may at least in part be due to posttranscriptional mechanisms, such as increased myostatin synthesis secondary to increased translational efficiency or reduced degradation of myostatin. Based on the available data, it may be concluded that myostatin signaling is perturbed in the skeletal muscle of patients with gastric cancer. Changes occur even in early disease stage and in the absence of significant weight loss, supporting the view that the molecular changes contributing to muscle wasting and cancer cachexia are operating since the early phases of cancer. Myostatin signaling is complex and may be affected by the interplay of inhibitors such as follistatin and/or other members of the TGFβ superfamily. 
Myostatin may represent a suitable target for future pharmacological interventions aimed at the prevention and treatment of cancer-related muscle loss. 1-03 Role of Activin A in human cancer cachexia (ACTICA study) A Loumaye, M de Barsy, M Nachit, L Frateur, P Lause, A Van Maanen, JP Thissen Cancer Center of the Cliniques Universitaires St-Luc; Radiology, Cliniques Universitaires St-Luc; Endocrinology, Diabetology and Nutrition Dept, IREC, Université Catholique de Louvain and Cliniques Universitaires St-Luc, Brussels, Belgium Cachexia is a complex metabolic syndrome associated with underlying illness, characterized by loss of skeletal muscle and not reversible by nutritional support. Recent animal observations suggest that the production of Activin A (ActA), a member of the TGFβ superfamily, by some tumors might contribute to cancer cachexia. This hypothesis seems attractive since inhibitors of ActA have been developed. Nevertheless, the role of ActA in the development of cancer cachexia has never been investigated in humans. Our goal was to demonstrate the role of ActA as a mediator of the human cancer cachexia and to assess its potential use as a biomarker of cachexia. Patients with colorectal or lung cancer were prospectively evaluated. All patients had clinical, nutritional and functional assessment. The skeletal muscle mass was measured by bioimpedance (BIA) and abdomen CT-scan (CT). Blood samples were collected in standardized conditions to measure circulating levels of ActA. One-hundred fifty-two patients were recruited (59 lung", "title": "" }, { "docid": "ad80f2e78e80397bd26dac5c0500266c", "text": "The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the ℓq norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms.", "title": "" }, { "docid": "4a4c0839d4790834047074d7f4b45cec", "text": "The growing popularity of massively accessed Web applications that store and analyze large amounts of data, being Facebook, Twitter and Google Search some prominent stores, known as NoSQL databases, has arisen. 
This paper reviews implementations of NoSQL databases in order to provide an understanding of current tools and their uses. First, NoSQL databases are compared with traditional RDBMS and important concepts are explained. Only databases allowing to persist data and distribute them along different computing nodes are within the scope of this review. Moreover, NoSQL databases are divided into different types: Key-Value, Wide-Column, Document-oriented and Graph-oriented. In each case, a comparison of available databases is carried out based on their most important features. © 2016 Published by Elsevier Ltd.", "title": "" }, { "docid": "6fb06fff9f16024cf9ccf9a782bffecd", "text": "In this chapter, we discuss 3D compression techniques for reducing the delays in transmitting triangle meshes over the Internet. We first explain how vertex coordinates, which represent surface samples, may be compressed through quantization, prediction, and entropy coding. We then describe how the connectivity, which specifies how the surface interpolates these samples, may be compressed by compactly encoding the parameters of a connectivity-graph construction process and by transmitting the vertices in the order in which they are encountered by this process. The storage of triangle meshes compressed with these techniques is usually reduced to about a byte per triangle. When the exact geometry and connectivity of the mesh are not essential, the triangulated surface may be simplified or retiled. Although simplification techniques and the progressive transmission of refinements may be used as a compression tool, we focus on recently proposed retiling techniques designed specifically to improve 3D compression. They are often able to reduce the total storage, which combines coordinates and connectivity, to half-a-bit per triangle without exceeding a mean square error of 1/10,000 of the diagonal of a box that contains the solid.", "title": "" }, { "docid": "328c1c6ed9e38a851c6e4fd3ab71c0f8", "text": "We present the MSP-IMPROV corpus, a multimodal emotional database, where the goal is to have control over lexical content and emotion while also promoting naturalness in the recordings. Studies on emotion perception often require stimuli with fixed lexical content, but that convey different emotions. These stimuli can also serve as an instrument to understand how emotion modulates speech at the phoneme level, in a manner that controls for coarticulation. Such audiovisual data are not easily available from natural recordings. A common solution is to record actors reading sentences that portray different emotions, which may not produce natural behaviors. We propose an alternative approach in which we define hypothetical scenarios for each sentence that are carefully designed to elicit a particular emotion. Two actors improvise these emotion-specific situations, leading them to utter contextualized, non-read renditions of sentences that have fixed lexical content and convey different emotions. We describe the context in which this corpus was recorded, the key features of the corpus, the areas in which this corpus can be useful, and the emotional content of the recordings. The paper also provides the performance for speech and facial emotion classifiers. 
The analysis brings novel classification evaluations where we study the performance in terms of inter-evaluator agreement and naturalness perception, leveraging the large size of the audiovisual database.", "title": "" }, { "docid": "10b6750b3f7a589463122b55b5776a7a", "text": "This article reviews research and interventions that have grown up around a model of psychological well-being generated more than two decades ago to address neglected aspects of positive functioning such as purposeful engagement in life, realization of personal talents and capacities, and enlightened self-knowledge. The conceptual origins of this formulation are revisited and scientific products emerging from 6 thematic areas are examined: (1) how well-being changes across adult development and later life; (2) what are the personality correlates of well-being; (3) how well-being is linked with experiences in family life; (4) how well-being relates to work and other community activities; (5) what are the connections between well-being and health, including biological risk factors, and (6) via clinical and intervention studies, how psychological well-being can be promoted for ever-greater segments of society. Together, these topics illustrate flourishing interest across diverse scientific disciplines in understanding adults as striving, meaning-making, proactive organisms who are actively negotiating the challenges of life. A take-home message is that increasing evidence supports the health protective features of psychological well-being in reducing risk for disease and promoting length of life. A recurrent and increasingly important theme is resilience - the capacity to maintain or regain well-being in the face of adversity. Implications for future research and practice are considered.", "title": "" }, { "docid": "d2feed22afd1b6702ff4a8ebe160a5d7", "text": "Contactless payment systems represent cashless payments that do not require physical contact between the devices used in consumer payment and POS terminals by the merchant. Radio frequency identification (RFID) devices can be embedded in the most different forms, as the form of cards, key rings, built into a watch, mobile phones. This type of payment supports the three largest payment system cards: Visa (Visa Contactless), MasterCard (MasterCard PayPass) and American Express (ExpressPay). All these products are compliant with international ISO 14443 standard, which provides a unique system for payment globally. Implementation of contactless payment systems are based on same infrastructure that exists for the payment cards with magnetic strips and does not require additional investments by the firm and financial institutions, other than upgrading the existing POS terminals. Technological solutions used for the implementation are solutions based on ISO 14443 standard, Sony FeliCa technology, RFID tokens and NFC (Near Field Communication) systems. This paper describes the advantages of introducing contactless payment system based on RF technology through pilot projects conducted by VISA, MasterCard and American Express Company in order to confirm in practice the applicability of this technology.", "title": "" }, { "docid": "8808c5f8ce726a9382facc63f9460e21", "text": "With the booming of deep learning in the recent decade, deep neural network has achieved state-of-art performances on many machine learning tasks and has been applied to more and more research fields. 
Stock market prediction is an attractive research topic since the successful prediction on the market’s future movement leads to significant profit. In this thesis, we investigate to combine the conventional stock analysis techniques with the popular deep learning together and study the impact of deep neural network on stock market prediction. Traditional short term stock market predictions are usually based on the analysis of historical market data, such as stock prices, moving averages or daily returns. Whereas financial news also contains useful information on public companies and the market. In this thesis we apply the popular word embedding methods and deep neural networks to leverage financial news to predict stock price movements in the market. Experimental results have shown that our proposed methods are simple but very effective, which can significantly improve the stock prediction accuracy on a standard financial database over the baseline system using only the historical price information.", "title": "" }, { "docid": "ddab10d66473ac7c4de26e923bf59083", "text": "Phased arrays allow electronic scanning of the antenna beam. However, these phased arrays are not widely used due to a high implementation cost. This article discusses the advantages of the RF architecture and the implementation of silicon RFICs for phased-array transmitters/receivers. In addition, this work also demonstrates how silicon RFICs can play a vital role in lowering the cost of phased arrays.", "title": "" } ]
scidocsrr
478918f9494f28a2f4defb078fd6dfb7
Better Human Computation Through Principled Voting
[ { "docid": "b786d2f98142a470c68d680ead6424ee", "text": "People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully ‘crowd-sourced’ through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.", "title": "" } ]
[ { "docid": "0cd96187b257ee09060768650432fe6d", "text": "Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.", "title": "" }, { "docid": "821d68aef4b665a2ae754759748f6657", "text": "In recent years, consumer-centric cloud computing paradigm has emerged as the development of smart electronic devices combined with the emerging cloud computing technologies. A variety of cloud services are delivered to the consumers with the premise that an effective and efficient cloud search service is achieved. For consumers, they want to find the most relevant products or data, which is highly desirable in the \"pay-as-you use\" cloud computing paradigm. As sensitive data (such as photo albums, emails, personal health records, financial records, etc.) are encrypted before outsourcing to cloud, traditional keyword search techniques are useless. Meanwhile, existing search approaches over encrypted cloud data support only exact or fuzzy keyword search, but not semantics-based multi-keyword ranked search. Therefore, how to enable an effective searchable system with support of ranked search remains a very challenging problem. This paper proposes an effective approach to solve the problem of multi-keyword ranked search over encrypted cloud data supporting synonym queries. The main contribution of this paper is summarized in two aspects: multi-keyword ranked search to achieve more accurate search results and synonym-based search to support synonym queries. Extensive experiments on real-world dataset were performed to validate the approach, showing that the proposed solution is very effective and efficient for multikeyword ranked searching in a cloud environment.", "title": "" }, { "docid": "88a052d1e6e5d6776711b58e0711869d", "text": "We are in the midst of a revolution in military affairs (RMA) unlike any seen since the Napoleonic Age, when France transformed warfare with the concept of levée en masse. Chief of Naval Operations Admiral Jay Johnson has called it \"a fundamental shift from what we call platform-centric warfare to something we call network-centric warfare,\" and it will prove to be the most important RMA in the past 200 years.", "title": "" }, { "docid": "94d7144fb4d3e1ebf9ad5e52fd7b5918", "text": "Regression testing is a crucial part of software development. It checks that software changes do not break existing functionality. An important assumption of regression testing is that test outcomes are deterministic: an unmodified test is expected to either always pass or always fail for the same code under test. Unfortunately, in practice, some tests often called flaky tests—have non-deterministic outcomes. Such tests undermine the regression testing as they make it difficult to rely on test results. 
We present the first extensive study of flaky tests. We study in detail a total of 201 commits that likely fix flaky tests in 51 open-source projects. We classify the most common root causes of flaky tests, identify approaches that could manifest flaky behavior, and describe common strategies that developers use to fix flaky tests. We believe that our insights and implications can help guide future research on the important topic of (avoiding) flaky tests.", "title": "" }, { "docid": "9423dcfc04f57be48adddc88e40f1963", "text": "Presynaptic Ca(V)2.2 (N-type) calcium channels are subject to modulation by interaction with syntaxin 1 and by a syntaxin 1-sensitive Galpha(O) G-protein pathway. We used biochemical analysis of neuronal tissue lysates and a new quantitative test of colocalization by intensity correlation analysis at the giant calyx-type presynaptic terminal of the chick ciliary ganglion to explore the association of Ca(V)2.2 with syntaxin 1 and Galpha(O). Ca(V)2.2 could be localized by immunocytochemistry (antibody Ab571) in puncta on the release site aspect of the presynaptic terminal and close to synaptic vesicle clouds. Syntaxin 1 coimmunoprecipitated with Ca(V)2.2 from chick brain and chick ciliary ganglia and was widely distributed on the presynaptic terminal membrane. A fraction of the total syntaxin 1 colocalized with the Ca(V)2.2 puncta, whereas the bulk colocalized with MUNC18-1. Galpha(O,) whether in its trimeric or monomeric state, did not coimmunoprecipitate with Ca(V)2.2, MUNC18-1, or syntaxin 1. However, the G-protein exhibited a punctate staining on the calyx membrane with an intensity that varied in synchrony with that for both Ca channels and syntaxin 1 but only weakly with MUNC18-1. Thus, syntaxin 1 appears to be a component of two separate complexes at the presynaptic terminal, a minor one at the transmitter release site with Ca(V)2.2 and Galpha(O), as well as in large clusters remote from the release site with MUNC18-1. These syntaxin 1 protein complexes may play distinct roles in presynaptic biology.", "title": "" }, { "docid": "404a06e80d5d5e40a78621b7c8a9dad9", "text": "Automated recommendations have become a ubiquitous part of today’s online user experience. These systems point us to additional items to purchase in online shops, they make suggestions to us on movies to watch, or recommend us people to connect with on social websites. In many of today’s applications, however, the only way for users to interact with the system is to inspect the recommended items. Often, no mechanisms are implemented for users to give the system feedback on the recommendations or to explicitly specify preferences, which can limit the potential overall value of the system for its users.\n Academic research in recommender systems is largely focused on algorithmic approaches for item selection and ranking. Nonetheless, over the years a variety of proposals were made on how to design more interactive recommenders. This work provides a comprehensive overview on the existing literature on user interaction aspects in recommender systems. We cover existing approaches for preference elicitation and result presentation, as well as proposals that consider recommendation as an interactive process. 
Throughout the work, we furthermore discuss examples of real-world systems and outline possible directions for future works.", "title": "" }, { "docid": "e0f7f087a4d8a33c1260d4ed0558edc3", "text": "In this review paper, it is intended to summarize and compare the methods of automatic detection of microcalcifications in digitized mammograms used in various stages of the Computer Aided Detection systems (CAD). In particular, the pre processing and enhancement, bilateral subtraction techniques, segmentation algorithms, feature extraction, selection and classification, classifiers, Receiver Operating Characteristic (ROC); Free-response Receiver Operating Characteristic (FROC) analysis and their performances are studied and compared.", "title": "" }, { "docid": "d21a5cfa20b1b0cc667243f1df47229d", "text": "The Segment Maxima Method for calculating gamut boundary descriptors of both colour reproduction media and colour images is introduced. Methods for determining the gamut boundary along a given line of mapping used by gamut mapping algorithms are then described, whereby these methods use the Gamut Boundary Descriptor obtained using the Segment Maxima Method. Throughout the article, the focus is both on colour reproduction media and colour images as well as on the suitability of the methods for use in gamut mapping. © 2000 John Wiley & Sons, Inc. Col Res Appl, 25, 394–401, 2000", "title": "" }, { "docid": "245de72c0f333f4814990926e08c13e9", "text": "Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.", "title": "" }, { "docid": "e870f2fe9a26b241bdeca882b6186169", "text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd recommender systems handbook as the choice of reading, you can find here.", "title": "" }, { "docid": "aa9cf52d7a544cdd3f910981fb23e402", "text": "Nowadays Big Data are becoming a popular topic and a comparatively new technological concept focused on many different disciplines like environmental science, social media and networks, industry and healthcare. Data volumes are on an upward trajectory associated with increased data velocity, and variety. Furthermore, they are needed to develop effective solutions to support intelligent, proactive and predictive processes. In this paper we exploit Big Data concepts for environmental sciences and water resources. The aim of this article is to present the concept and architecture of our Big Data Open Platform used for supporting Water Resources Management. 
This Platform has been designed to provide effective tools that allow water system managers to solve complex water resources systems, water modeling issues and help in decision making. The Platform brings a variety of information technology tools including stochastic aspects, high performance computing, simulation models, hydraulic and hydrological models, grid computing, decision tools, Big Data analysis system, communication and diffusion system, database management, geographic information system (GIS) and Knowledge based expert system. The operators' objectives of this Big Data Open Platform are to solve and discuss water resources problems that are featured by a huge volume of collected, analyzed and visualized data, to analyze the heterogeneity of data resulting from various sources including structured, unstructured and semi-structured data, also to prevent and/or avoid a catastrophic event related to floods and/or droughts, through hydraulic infrastructures designed for such purposes or strategic planning. This first paper will focus on the first part developed and based on J2EE platform and specifically the hypsometrical approach considered as a decision tool allowing users to compare the effects of different current and future management scenarios and make choice to preserve the environment and natural resources.", "title": "" }, { "docid": "55285f99e1783bcba47ab41e56171026", "text": "Two different formal definitions of gray-scale reconstruction are presented. The use of gray-scale reconstruction in various image processing applications discussed to illustrate the usefulness of this transformation for image filtering and segmentation tasks. The standard parallel and sequential approaches to reconstruction are reviewed. It is shown that their common drawback is their inefficiency on conventional computers. To improve this situation, an algorithm that is based on the notion of regional maxima and makes use of breadth-first image scannings implemented using a queue of pixels is introduced. Its combination with the sequential technique results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.", "title": "" }, { "docid": "1682c1be8397a4d8e859e76cdc849740", "text": "With the advent of RFLPs, genetic linkage maps are now being assembled for a number of organisms including both inbred experimental populations such as maize and outbred natural populations such as humans. Accurate construction of such genetic maps requires multipoint linkage analysis of particular types of pedigrees. We describe here a computer package, called MAPMAKER, designed specifically for this purpose. The program uses an efficient algorithm that allows simultaneous multipoint analysis of any number of loci. MAPMAKER also includes an interactive command language that makes it easy for a geneticist to explore linkage data. MAPMAKER has been applied to the construction of linkage maps in a number of organisms, including the human and several plants, and we outline the mapping strategies that have been used.", "title": "" }, { "docid": "ee06f781207415db38de63f89ca198c4", "text": "State-of-the-art hearing prostheses are equipped with acoustic noise reduction algorithms to improve speech intelligibility. 
Currently, one of the major challenges is to perform acoustic noise reduction in so-called cocktail party scenarios with multiple speakers, in particular because it is difficult-if not impossible-for the algorithm to determine which are the target speaker(s) that should be enhanced, and which speaker(s) should be treated as interfering sources. Recently, it has been shown that electroencephalography (EEG) can be used to perform auditory attention detection, i.e., to detect to which speaker a subject is attending based on recordings of neural activity. In this paper, we combine such an EEG-based auditory attention detection (AAD) paradigm with an acoustic noise reduction algorithm based on the multi-channel Wiener filter (MWF), leading to a neuro-steered MWF. In particular, we analyze how the AAD accuracy affects the noise suppression performance of an adaptive MWF in a sliding-window implementation, where the user switches his attention between two speakers.", "title": "" }, { "docid": "759f5b6d1889e09cfc78b2539283fa38", "text": "CONTEXT\nVentilator management protocols shorten the time required to wean adult patients from mechanical ventilation. The efficacy of such weaning protocols among children has not been studied.\n\n\nOBJECTIVE\nTo evaluate whether weaning protocols are superior to standard care (no defined protocol) for infants and children with acute illnesses requiring mechanical ventilator support and whether a volume support weaning protocol using continuous automated adjustment of pressure support by the ventilator (ie, VSV) is superior to manual adjustment of pressure support by clinicians (ie, PSV).\n\n\nDESIGN AND SETTING\nRandomized controlled trial conducted in the pediatric intensive care units of 10 children's hospitals across North America from November 1999 through April 2001.\n\n\nPATIENTS\nOne hundred eighty-two spontaneously breathing children (<18 years old) who had been receiving ventilator support for more than 24 hours and who failed a test for extubation readiness on minimal pressure support.\n\n\nINTERVENTIONS\nPatients were randomized to a PSV protocol (n = 62), VSV protocol (n = 60), or no protocol (n = 60).\n\n\nMAIN OUTCOME MEASURES\nDuration of weaning time (from randomization to successful extubation); extubation failure (any invasive or noninvasive ventilator support within 48 hours of extubation).\n\n\nRESULTS\nExtubation failure rates were not significantly different for PSV (15%), VSV (24%), and no protocol (17%) (P =.44). Among weaning successes, median duration of weaning was not significantly different for PSV (1.6 days), VSV (1.8 days), and no protocol (2.0 days) (P =.75). Male children more frequently failed extubation (odds ratio, 7.86; 95% confidence interval, 2.36-26.2; P<.001). Increased sedative use in the first 24 hours of weaning predicted extubation failure (P =.04) and, among extubation successes, duration of weaning (P<.001).\n\n\nCONCLUSIONS\nIn contrast with adult patients, the majority of children are weaned from mechanical ventilator support in 2 days or less. Weaning protocols did not significantly shorten this brief duration of weaning.", "title": "" }, { "docid": "650d7361f87373410c833344ce0f2134", "text": "Child maltreatment (CM) is associated with poor long-term health outcomes. However, knowledge about CM prevalence and related consequences is scarce among adults in South European countries. 
We examined the self-reported prevalence of five different forms of CM in a community sample of 1,200 Portuguese adults; we compared the results with similar samples from three other countries, using the same instrument. We also explored the relationship between CM and psychological symptoms. Cross-sectional data using the Childhood Trauma Questionnaire-Short Form and the Brief Symptom Inventory were analyzed. Moderate or severe CM exposure was self-reported by 14.7 % of the sample, and 67 % was exposed to more than one form of CM. Emotional neglect was the most endorsed experience, with women reporting greater emotional abuse and men reporting larger physical abuse. Physical and sexual abuse was less self-reported by Portuguese than by American or German subjects. CM exposure predicted 12.8 % of the psychological distress. Emotional abuse was the strongest predictor for psychological symptoms, namely for paranoid ideation, depression, and interpersonal sensitivity. Emotional abuse overlapped with the exposure to all other CM forms, and interacted with physical abuse, physical neglect, and emotional neglect to predict psychological distress. Low exposure to emotional abuse was directly associated with the effects of physical abuse, physical neglect, and emotional neglect to predict adult psychological distress. Verbal abuse experiences were frequently reported and had the highest correlations with adult psychological distress. Our results underline the potential hurtful effects of child emotional abuse among Portuguese adults in the community. They also highlight the need to improve prevention and intervention actions to reduce exposure and consequences of CM, particularly emotional abuse.", "title": "" }, { "docid": "a1c859b44c46ebf4d2d413f4303cb4f7", "text": "We study the parsing complexity of Combinatory Categorial Grammar (CCG) in the formalism of Vijay-Shanker and Weir (1994). As our main result, we prove that any parsing algorithm for this formalism will take in the worst case exponential time when the size of the grammar, and not only the length of the input sentence, is included in the analysis. This sets the formalism of Vijay-Shanker andWeir (1994) apart from weakly equivalent formalisms such as Tree-Adjoining Grammar (TAG), for which parsing can be performed in time polynomial in the combined size of grammar and input sentence. Our results contribute to a refined understanding of the class of mildly context-sensitive grammars, and inform the search for new, mildly context-sensitive versions of CCG.", "title": "" }, { "docid": "9be14309092d4b974ca9a82d39b7f6ae", "text": "Extreme Learning Machine (ELM) and its variants have been widely used for many applications due to its fast convergence and good generalization performance. Though the distributed ELM based on MapReduce framework can handle very large scale training dataset in big data applications, how to cope with its rapidly updating is still a challenging task. Therefore, in this paper, a novel Elastic Extreme Learning Machine based on MapReduce framework, named Elastic ELM (ELM), is proposed to cover the shortage of ELM whose learning ability is weak to the updated large-scale training dataset. Firstly, after analyzing the property of ELM adequately, it can be found out that its most computation-expensive part, matrix multiplication, can be incrementally, decrementally and correctionally calculated. 
Next, the Elastic ELM based on MapReduce framework is developed, which first calculates the intermediate matrix multiplications of the updated training data subset, and then update the matrix multiplications by modifying the old matrix multiplications with the intermediate ones. Then, the corresponding new output weight vector can be obtained with centralized computing using the update the matrix multiplications. Therefore, the efficient learning of rapidly updated massive training dataset can be realized effectively. Finally, we conduct extensive experiments on synthetic data to verify the effectiveness and efficiency of our proposed ELM in learning massive rapidly updated training dataset with various experimental settings. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "8236e558ae45744c7c62a7e172689e88", "text": "Given a table where rows correspond to records and columns correspond to attributes, we want to find a small number of patterns that succinctly summarize the dataset. For example, given a set of patient records with several attributes each, how can we find (a) that the \"most representative\" pattern is, say, (male, adult, *), followed by (*, child, low-cholesterol), etc? We propose TSum, a method that provides a sequence of patterns ordered by their \"representativeness.\" It can decide both which these patterns are, as well as how many are necessary to properly summarize the data. Our main contribution is formulating a general framework, TSum, using compression principles. TSum can easily accommodate different optimization strategies for selecting and refining patterns. The discovered patterns can be used to both represent the data efficiently, as well as interpret it quickly. Extensive experiments demonstrate the effectiveness and intuitiveness of our discovered patterns.", "title": "" } ]
scidocsrr
0c429f8275fda8b676dae33a377e41fc
How analysts cognitively “connect the dots”
[ { "docid": "34a7d306a788ab925db8d0afe4c21c5a", "text": "The Sandbox is a flexible and expressive thinking environment that supports both ad-hoc and more formal analytical tasks. It is the evidence marshalling and sensemaking component for the analytical software environment called nSpace. This paper presents innovative Sandbox human information interaction capabilities and the rationale underlying them including direct observations of analysis work as well as structured interviews. Key capabilities for the Sandbox include “put-this-there” cognition, automatic process model templates, gestures for the fluid expression of thought, assertions with evidence and scalability mechanisms to support larger analysis tasks. The Sandbox integrates advanced computational linguistic functions using a Web Services interface and protocol. An independent third party evaluation experiment with the Sandbox has been completed. The experiment showed that analyst subjects using the Sandbox did higher quality analysis in less time than with standard tools. Usability test results indicated the analysts became proficient in using the Sandbox with three hours of training.", "title": "" } ]
[ { "docid": "93e2a4357573c446b2747f7b21d9d443", "text": "Social Network Systems pioneer a paradigm of access control that is distinct from traditional approaches to access control. Gates coined the term Relationship-Based Access Control (ReBAC) to refer to this paradigm. ReBAC is characterized by the explicit tracking of interpersonal relationships between users, and the expression of access control policies in terms of these relationships. This work explores what it takes to widen the applicability of ReBAC to application domains other than social computing. To this end, we formulate an archetypical ReBAC model to capture the essence of the paradigm, that is, authorization decisions are based on the relationship between the resource owner and the resource accessor in a social network maintained by the protection system. A novelty of the model is that it captures the contextual nature of relationships. We devise a policy language, based on modal logic, for composing access control policies that support delegation of trust. We use a case study in the domain of Electronic Health Records to demonstrate the utility of our model and its policy language. This work provides initial evidence to the feasibility and utility of ReBAC as a general-purpose paradigm of access control.", "title": "" }, { "docid": "086f9cbed93553ca00b2afeff1cb8508", "text": "Rapid advance of location acquisition technologies boosts the generation of trajectory data, which track the traces of moving objects. A trajectory is typically represented by a sequence of timestamped geographical locations. A wide spectrum of applications can benefit from the trajectory data mining. Bringing unprecedented opportunities, large-scale trajectory data also pose great challenges. In this paper, we survey various applications of trajectory data mining, e.g., path discovery, location prediction, movement behavior analysis, and so on. Furthermore, this paper reviews an extensive collection of existing trajectory data mining techniques and discusses them in a framework of trajectory data mining. This framework and the survey can be used as a guideline for designing future trajectory data mining solutions.", "title": "" }, { "docid": "17f88b8c51ee5a7fbbed43d25ce237fb", "text": "Al-Quran knowledge representations involved classification of Al-Quran verses for providing better understanding of the readers. In the current era of social media challenges, the representation of knowledge must be understood by human and computer in order to ensure the correctness of Al-Quran semantics are persevered. Current approaches used conventional methods such as taxonomy, hierarchy or tree structure, which only provides a concept definition without linked to other sources of knowledge explanation. This research aims to develop the Al-Quran Ontology by using theme-based classification approach. The ontology model for Al-Quran is developed based on the Al-Quran knowledge theme defined in Syammil Al-Quran Miracle the Reference. The theme-based ontology approach has shown that the Al-Quran knowledge can be classified and presented systematically. This will encourage the development of applications for Al-Quran readers. 
Moreover, the ontology structure that representing the theme concepts in Al-Quran was reviewed and validated by the domain experts in Al-Quran knowledge.", "title": "" }, { "docid": "530cb20db77c76d229fd90e73b3a65ca", "text": "While automatic response generation for building chatbot s ys ems has drawn a lot of attention recently, there is limited understanding on when we need to c onsider the linguistic context of an input text in the generation process. The task is challeng ing, as messages in a conversational environment are short and informal, and evidence that can in dicate a message is context dependent is scarce. After a study of social conversation data cra wled from the web, we observed that some characteristics estimated from the responses of messa ges are discriminative for identifying context dependent messages. With the characteristics as we ak supervision, we propose using a Long Short Term Memory (LSTM) network to learn a classifier. O ur method carries out text representation and classifier learning in a unified framewor k. Experimental results show that the proposed method can significantly outperform baseline meth ods on accuracy of classification.", "title": "" }, { "docid": "a1b7f477c339f30587a2f767327b4b41", "text": "Software game is a kind of application that is used not only for entertainment, but also for serious purposes that can be applicable to different domains such as education, business, and health care. Multidisciplinary nature of the game development processes that combine sound, art, control systems, artificial intelligence (AI), and human factors, makes the software game development practice different from traditional software development. However, the underline software engineering techniques help game development to achieve maintainability, flexibility, lower effort and cost, and better design. The purpose of this study is to assesses the state of the art research on the game development software engineering process and highlight areas that need further consideration by researchers. In the study, we used a systematic literature review methodology based on well-known digital libraries. The largest number of studies have been reported in the production phase of the game development software engineering process life cycle, followed by the pre-production phase. By contrast, the post-production phase has received much less research activity than the pre-production and production phases. The results of this study suggest that the game development software engineering process has many aspects that need further attention from researchers; that especially includes the postproduction phase.", "title": "" }, { "docid": "04e4c1b80bcf1a93cafefa73563ea4d3", "text": "The last decade has produced an explosion in neuroscience research examining young children's early processing of language. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. The phonetic level of language is especially accessible to experimental studies that document the innate state and the effect of learning on the brain. The neural signatures of learning at the phonetic level can be documented at a remarkably early point in development. Continuity in linguistic development from infants' earliest brain responses to phonetic stimuli is reflected in their language and prereading abilities in the second, third, and fifth year of life, a finding with theoretical and clinical impact. 
There is evidence that early mastery of the phonetic units of language requires learning in a social context. Neuroscience on early language learning is beginning to reveal the multiple brain systems that underlie the human language faculty.", "title": "" }, { "docid": "afe00d3f8364159d77611582c611e981", "text": "Today, in addition to traditional mobile services, there are new ones already being used, thanks to the advances in 3G-related technologies. Our work contributed to the emerging body of research by integrating TAM and Diffusion Theory. Based on a sample of 542 Dutch consumers, we found that traditional antecedents of behavioral intention, ease of use and perceived usefulness, can be linked to diffusion-related variables, such as social influence and perceived benefits (flexibility and status). 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "29199ac45d4aa8035fd03e675406c2cb", "text": "This work presents an autonomous mobile robot in order to cover an unknown terrain “randomly”, namely entirely, unpredictably and evenly. This aim is very important, especially in military missions, such as the surveillance of terrains, the terrain exploration for explosives and the patrolling for intrusion in military facilities. The “heart” of the proposed robot is a chaotic motion controller, which is based on a chaotic true random bit generator. This generator has been implemented with a microcontroller, which converts the produced chaotic bit sequence, to the robot's motion. Experimental results confirm that this approach, with an appropriate sensor for obstacle avoidance, can obtain very satisfactory results in regard to the fast scanning of the robot’s workspace with unpredictable way. Key-Words: Autonomous mobile robot, terrain coverage, microcontroller, random bit generator, nonlinear system, chaos, Logistic map.", "title": "" }, { "docid": "e18dc3045b138032bcca21696ba12ecf", "text": "There has been tremendous progress in algorithmic methods for computing driving directions on road networks. Most of that work focuses on time-independent route planning, where it is assumed that the cost on each arc is constant per query. In practice, the current traffic situation significantly influences the travel time on large parts of the road network, and it changes over the day. One can distinguish between traffic congestion that can be predicted using historical traffic data, and congestion due to unpredictable events, e. g., accidents. In this work, we study the dynamic and time-dependent route planning problem, which takes both prediction (based on historical data) and live traffic into account. To this end, we propose a practical algorithm that, while robust to user preferences, is able to integrate global changes of the time-dependent metric (e. g., due to traffic updates or user restrictions) faster than previous approaches, while allowing subsequent queries that enable interactive applications.", "title": "" }, { "docid": "058a4f93fb5c24c0c9967fca277ee178", "text": "We report on the SUM project which applies automatic summarisation techniques to the legal domain. We describe our methodology whereby sentences from the text are classified according to their rhetorical role in order that particular types of sentence can be extracted to form a summary. 
We describe some experiments with judgments of the House of Lords: we have performed automatic linguistic annotation of a small sample set and then hand-annotated the sentences in the set in order to explore the relationship between linguistic features and argumentative roles. We use state-of-the-art NLP techniques to perform the linguistic annotation using XML-based tools and a combination of rule-based and statistical methods. We focus here on the predictive capacity of tense and aspect features for a classifier.", "title": "" }, { "docid": "5d377a17d3444d6137be582cbbc6c1db", "text": "Next generation malware will by be characterized by the intense use of polymorphic and metamorphic techniques aimed at circumventing the current malware detectors, based on pattern matching. In order to deal with this new kind of threat novel techniques have to be devised for the realization of malware detectors. Recent papers started to address such issue and this paper represents a further contribution in such a field. More precisely in this paper we propose a strategy for the detection of malicious codes that adopt the most evolved self-mutation techniques; we also provide experimental data supporting the validity of", "title": "" }, { "docid": "cf50580f55443f3d9f24a48cf059c2c2", "text": "Health care is one of the greatest concerns in India. While, those living in cities and big towns have access to high end health services, the millions of people living in rural India, particularly in the remote parts of the country face problems of inadequate facilities and poor access to healthcare. Many experts, including researchers, policy makers and practitioners identified that, there is a big gap in the knowledge about innovations in public and private health financing and delivery. The inefficiencies and inequities in the public health care access in India have pushed forward the need for creative thinking and innovative solutions to strengthen the same. The problems existing in the health care scenario provides apparent calls for the need to change the existing structure of the present health care services by applying big data analytics. This paper identifies the massive shortage of proper health care facilities and addresses how to provide greater access to primary health care services in rural India. Further, it also addresses the critical computing and analytical ability of Big Data in processing huge volumes of transactional data in real time situations to turn the dream of Svasth Bharath (Healthy India) into reality. The objective of this paper is to present the reforms in the health care sector and boosts the discussions on how government can harness innovations in the big data analytics to improve the rural health care system. Keywords— Big Data Analytics, HealthCare, Rural Health Care, e-Health Care, Tele Medicine, Svasth Bharath.", "title": "" }, { "docid": "4768001167cefad7b277e3b77de648bb", "text": "MicroRNAs (miRNAs) regulate gene expression at the posttranscriptional level and are therefore important cellular components. As is true for protein-coding genes, the transcription of miRNAs is regulated by transcription factors (TFs), an important class of gene regulators that act at the transcriptional level. The correct regulation of miRNAs by TFs is critical, and increasing evidence indicates that aberrant regulation of miRNAs by TFs can cause phenotypic variations and diseases. 
Therefore, a TF-miRNA regulation database would be helpful for understanding the mechanisms by which TFs regulate miRNAs and understanding their contribution to diseases. In this study, we manually surveyed approximately 5000 reports in the literature and identified 243 TF-miRNA regulatory relationships, which were supported experimentally from 86 publications. We used these data to build a TF-miRNA regulatory database (TransmiR, http://cmbi.bjmu.edu.cn/transmir), which contains 82 TFs and 100 miRNAs with 243 regulatory pairs between TFs and miRNAs. In addition, we included references to the published literature (PubMed ID) information about the organism in which the relationship was found, whether the TFs and miRNAs are involved with tumors, miRNA function annotation and miRNA-associated disease annotation. TransmiR provides a user-friendly interface by which interested parties can easily retrieve TF-miRNA regulatory pairs by searching for either a miRNA or a TF.", "title": "" }, { "docid": "9bcf45278e391a6ab9a0b33e93d82ea9", "text": "Non-orthogonal multiple access (NOMA) is a potential enabler for the development of 5G and beyond wireless networks. By allowing multiple users to share the same time and frequency, NOMA can scale up the number of served users, increase spectral efficiency, and improve user-fairness compared to existing orthogonal multiple access (OMA) techniques. While single-cell NOMA has drawn significant attention recently, much less attention has been given to multi-cell NOMA. This article discusses the opportunities and challenges of NOMA in a multi-cell environment. As the density of base stations and devices increases, inter-cell interference becomes a major obstacle in multi-cell networks. As such, identifying techniques that combine interference management approaches with NOMA is of great significance. After discussing the theory behind NOMA, this article provides an overview of the current literature and discusses key implementation and research challenges, with an emphasis on multi-cell NOMA.", "title": "" }, { "docid": "15aa0333268dd812546d1cc9c24103b8", "text": "Relation extraction is the process of identifying instances of specified types of semantic relations in text; relation type extension involves extending a relation extraction system to recognize a new type of relation. We present LGCo-Testing, an active learning system for relation type extension based on local and global views of relation instances. Locally, we extract features from the sentence that contains the instance. Globally, we measure the distributional similarity between instances from a 2 billion token corpus. Evaluation on the ACE 2004 corpus shows that LGCo-Testing can reduce annotation cost by 97% while maintaining the performance level of supervised learning.", "title": "" }, { "docid": "4d0185efbe22d65e5bb8bbf0a31fe51c", "text": "Determining the polarity of a sentimentbearing expression requires more than a simple bag-of-words approach. In particular, words or constituents within the expression can interact with each other to yield a particular overall polarity. In this paper, we view such subsentential interactions in light of compositional semantics, and present a novel learningbased approach that incorporates structural inference motivated by compositional semantics into the learning procedure. 
Our experiments show that (1) simple heuristics based on compositional semantics can perform better than learning-based methods that do not incorporate compositional semantics (accuracy of 89.7% vs. 89.1%), but (2) a method that integrates compositional semantics into learning performs better than all other alternatives (90.7%). We also find that “contentword negators”, not widely employed in previous work, play an important role in determining expression-level polarity. Finally, in contrast to conventional wisdom, we find that expression-level classification accuracy uniformly decreases as additional, potentially disambiguating, context is considered.", "title": "" }, { "docid": "cfdee8bd0802872f4bd216df226f9c35", "text": "Single-unit recording studies in the macaque have carefully documented the modulatory effects of attention on the response properties of visual cortical neurons. Attention produces qualitatively different effects on firing rate, depending on whether a stimulus appears alone or accompanied by distracters. Studies of contrast gain control in anesthetized mammals have found parallel patterns of results when the luminance contrast of a stimulus increases. This finding suggests that attention has co-opted the circuits that mediate contrast gain control and that it operates by increasing the effective contrast of the attended stimulus. Consistent with this idea, microstimulation of the frontal eye fields, one of several areas that control the allocation of spatial attention, induces spatially local increases in sensitivity both at the behavioral level and among neurons in area V4, where endogenously generated attention increases contrast sensitivity. Studies in the slice have begun to explain how modulatory signals might cause such increases in sensitivity.", "title": "" }, { "docid": "e44636035306e122bf50115552516f53", "text": "Texts and dialogues often express information indirectly. For instance, speakers’ answers to yes/no questions do not always straightforwardly convey a ‘yes’ or ‘no’ answer. The intended reply is clear in some cases (Was it good? It was great!) but uncertain in others (Was it acceptable? It was unprecedented.). In this paper, we present methods for interpreting the answers to questions like these which involve scalar modifiers. We show how to ground scalar modifier meaning based on data collected from the Web. We learn scales between modifiers and infer the extent to which a given answer conveys ‘yes’ or ‘no’. To evaluate the methods, we collected examples of question–answer pairs involving scalar modifiers from CNN transcripts and the Dialog Act corpus and use response distributions from Mechanical Turk workers to assess the degree to which each answer conveys ‘yes’ or ‘no’. Our experimental results closely match the Turkers’ response data, demonstrating that meanings can be learned from Web data and that such meanings can drive pragmatic inference.", "title": "" }, { "docid": "92d04ad5a9fa32c2ad91003213b1b86d", "text": "You're being asked to quantify usability improvements with statistics. But even with a background in statistics, you are hesitant to statistically analyze the data, as you may be unsure about which statistical tests to...", "title": "" } ]
scidocsrr
eaad96e94e1da4f41dc7ff94702cfe36
Go-Explore: a New Approach for Hard-Exploration Problems
[ { "docid": "d272cf01340c8dcc3c24651eaf876926", "text": "We propose a new method for learning from a single demonstration to solve hard exploration tasks like the Atari game Montezuma’s Revenge. Instead of imitating human demonstrations, as proposed in other recent works, our approach is to maximize rewards directly. Our agent is trained using off-the-shelf reinforcement learning, but starts every episode by resetting to a state from a demonstration. By starting from such demonstration states, the agent requires much less exploration to learn a game compared to when it starts from the beginning of the game at every episode. We analyze reinforcement learning for tasks with sparse rewards in a simple toy environment, where we show that the run-time of standard RL methods scales exponentially in the number of states between rewards. Our method reduces this to quadratic scaling, opening up many tasks that were previously infeasible. We then apply our method to Montezuma’s Revenge, for which we present a trained agent achieving a high-score of 74,500, better than any previously published result.", "title": "" }, { "docid": "ba1368e4acc52395a8e9c5d479d4fe8f", "text": "This talk will present an overview of our recent research on distributional reinforcement learning. Our starting point is our recent ICML paper, in which we argued for the fundamental importance of the value distribution: the distribution of random returns received by a reinforcement learning agent. This is in contrast to the common approach, which models the expectation of this return, or value. Back then, we were able to design a new algorithm that learns the value distribution through a TD-like bootstrap process and achieved state-of-the-art performance on games from the Arcade Learning Environment (ALE). However, this left open the question as to why the distributional approach should perform better at all. We’ve since delved deeper into what makes distributional RL work: first by improving the original using quantile regression, which directly minimizes the Wasserstein metric; and second by unearthing surprising connections between the original C51 algorithm and the distant cousin of the Wasserstein metric, the Cramer distance.", "title": "" }, { "docid": "4fc6ac1b376c965d824b9f8eb52c4b50", "text": "Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as -greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.", "title": "" }, { "docid": "8913c543d350ff147b9f023729f4aec3", "text": "The reality gap, which often makes controllers evolved in simulation inefficient once transferred onto the physical robot, remains a critical issue in evolutionary robotics (ER). 
We hypothesize that this gap highlights a conflict between the efficiency of the solutions in simulation and their transferability from simulation to reality: the most efficient solutions in simulation often exploit badly modeled phenomena to achieve high fitness values with unrealistic behaviors. This hypothesis leads to the transferability approach, a multiobjective formulation of ER in which two main objectives are optimized via a Pareto-based multiobjective evolutionary algorithm: 1) the fitness; and 2) the transferability, estimated by a simulation-to-reality (STR) disparity measure. To evaluate this second objective, a surrogate model of the exact STR disparity is built during the optimization. This transferability approach has been compared to two reality-based optimization methods, a noise-based approach inspired from Jakobi's minimal simulation methodology and a local search approach. It has been validated on two robotic applications: 1) a navigation task with an e-puck robot; and 2) a walking task with a 8-DOF quadrupedal robot. For both experimental setups, our approach successfully finds efficient and well-transferable controllers only with about ten experiments on the physical robot.", "title": "" } ]
[ { "docid": "a0251ae10bfabd188766aa2453b8cebb", "text": "This paper presents the development of automatic vehicle plate detection system using image processing technique. The famous name for this system is Automatic Number Plate Recognition (ANPR). Automatic vehicle plate detection system is commonly used in field of safety and security systems especially in car parking area. Beside the safety aspect, this system is applied to monitor road traffic such as the speed of vehicle and identification of the vehicle's owner. This system is designed to assist the authorities in identifying the stolen vehicle not only for car but motorcycle as well. In this system, the Optical Character Recognition (OCR) technique was the prominent technique employed by researchers to analyse image of vehicle plate. The limitation of this technique was the incapability of the technique to convert text or data accurately. Besides, the characters, the background and the size of the vehicle plate are varied from one country to other country. Hence, this project proposes a combination of image processing technique and OCR to obtain the accurate vehicle plate recognition for vehicle in Malaysia. The outcome of this study is the system capable to detect characters and numbers of vehicle plate in different backgrounds (black and white) accurately. This study also involves the development of Graphical User Interface (GUI) to ease user in recognizing the characters and numbers in the vehicle or license plates.", "title": "" }, { "docid": "e882a33ff28c37b379c22d73e16147b3", "text": "Combining ant colony optimization (ACO) and multiobjective evolutionary algorithm based on decomposition (MOEA/D), this paper proposes a multiobjective evolutionary algorithm, MOEA/D-ACO. Following other MOEA/D-like algorithms, MOEA/D-ACO decomposes a multiobjective optimization problem into a number of single objective optimization problems. Each ant (i.e. agent) is responsible for solving one subproblem. All the ants are divided into a few groups and each ant has several neighboring ants. An ant group maintains a pheromone matrix and an individual ant has a heuristic information matrix. During the search, each ant also records the best solution found so far for its subproblem. To construct a new solution, an ant combines information from its group’s pheromone matrix, its own heuristic information matrix and its current solution. An ant checks the new solutions constructed by itself and its neighbors, and updates its current solution if it has found a better one in terms of its own objective. Extensive experiments have been conducted in this paper to study and compare MOEA/D-ACO with other algorithms on two set of test problems. On the multiobjective 0-1 knapsack problem, MOEA/D-ACO outperforms MOEA/D-GA on all the nine test instances. We also demonstrate that the heuristic information matrices in MOEA/D-ACO are crucial to the good performance of MOEA/D-ACO for the knapsack problem. On the biobjective traveling salesman problem, MOEA/D-ACO performs much better than BicriterionAnt on all the 12 test instances. We also evaluate the effects of grouping, neighborhood and the location information of current solutions on the performance of MOEA/D-ACO. 
The work in this paper shows that reactive search optimization scheme, i.e., the “learning while optimizing” principle, is effective in improving multiobjective optimization algorithms.", "title": "" }, { "docid": "7a5370c855d37a105e7f7b1f3a1f0d95", "text": "Just recently much Information Systems (IS) research focuses on master data management (MDM) which promises to increase an organization's overall core data quality. Above any doubt, however, MDM initiatives confront organizations with multi-faceted and complex challenges that call for a more strategic approach to MDM. In this paper we introduce a framework for approaching MDM projects that has been developed in the course of a design science research study. The framework distinguishes four major strategies of MDM project initiations all featuring their specific assets and drawbacks. The usefulness of our artifact is illustrated in a short case narrative.", "title": "" }, { "docid": "5c754c2fe1536a4e44800eaf7cb516e5", "text": "This article proposes an original method for grading the colours between different images or shots. The first stage of the method is to find a one-to-one colour mapping that transfers the palette of an example target picture to the original picture. This is performed using an original and parameter free algorithm that is able to transform any N -dimensional probability density function into another one. The proposed algorithm is iterative, non-linear and has a low computational cost. Applying the colour mapping on the original picture allows reproducing the same ‘feel’ as the target picture, but can also increase the graininess of the original picture, especially if the colour dynamic of the two pictures is very different. The second stage of the method is to reduce this grain artefact through an efficient post-processing algorithm that intends to preserve the gradient field of the original picture.", "title": "" }, { "docid": "c3e63d82514b9e9b1cc172ea34f7a53e", "text": "Deep Learning is one of the next big things in Recommendation Systems technology. The past few years have seen the tremendous success of deep neural networks in a number of complex machine learning tasks such as computer vision, natural language processing and speech recognition. After its relatively slow uptake by the recommender systems community, deep learning for recommender systems became widely popular in 2016.\n We believe that a tutorial on the topic of deep learning will do its share to further popularize the topic. Notable recent application areas are music recommendation, news recommendation, and session-based recommendation. The aim of the tutorial is to encourage the application of Deep Learning techniques in Recommender Systems, to further promote research in deep learning methods for Recommender Systems.", "title": "" }, { "docid": "71e65d1ae7ff899467cc93b3858992b8", "text": "This paper describes a semi-automated process, framework and tools for harvesting, assessing, improving and maintaining high-quality linked-data. The framework, known as DaCura1, provides dataset curators, who may not be knowledge engineers, with tools to collect and curate evolving linked data datasets that maintain quality over time. The framework encompasses a novel process, workflow and architecture. A working implementation has been produced and applied firstly to the publication of an existing social-sciences dataset, then to the harvesting and curation of a related dataset from an unstructured data-source. 
The framework’s performance is evaluated using data quality measures that have been developed to measure existing published datasets. An analysis of the framework against these dimensions demonstrates that it addresses a broad range of real-world data quality concerns. Experimental results quantify the impact of the DaCura process and tools on data quality through an assessment framework and methodology which combines automated and human data quality controls. Improving Curated WebData Quality with Structured Harvesting and Assessment", "title": "" }, { "docid": "ec533cd5d21dca090a90311dc75cccdd", "text": "A challenging problem in software engineering is to check if a program has an execution path satisfying a regular property. We propose a novel method of dynamic symbolic execution (DSE) to automatically find a path of a program satisfying a regular property. What makes our method distinct is when exploring the path space, DSE is guided by the synergy of static analysis and dynamic analysis to find a target path as soon as possible. We have implemented our guided DSE method for Java programs based on JPF and WALA, and applied it to 13 real-world open source Java programs, a total of 225K lines of code, for extensive experiments. The results show the effectiveness, efficiency, feasibility and scalability of the method. Compared with the pure DSE on the time to find the first target path, the average speedup of the guided DSE is more than 258X when analyzing the programs that have more than 100 paths.", "title": "" }, { "docid": "e2b3001513059a02cf053cadab6abb85", "text": "Data mining is the process of discovering meaningful new correlation, patterns and trends by sifting through large amounts of data, using pattern recognition technologies as well as statistical and mathematical techniques. Cluster analysis is often used as one of the major data analysis technique widely applied for many practical applications in emerging areas of data mining. Two of the most delegated, partition based clustering algorithms namely k-Means and Fuzzy C-Means are analyzed in this research work. These algorithms are implemented by means of practical approach to analyze its performance, based on their computational time. The telecommunication data is the source data for this analysis. The connection oriented broad band data is used to find the performance of the chosen algorithms. The distance (Euclidian distance) between the server locations and their connections are rearranged after processing the data. The computational complexity (execution time) of each algorithm is analyzed and the results are compared with one another. By comparing the result of this practical approach, it was found that the results obtained are more accurate, easy to understand and above all the time taken to process the data was substantially high in Fuzzy C-Means algorithm than the k-Means. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "944fba026f94099b89e087b7983d38c0", "text": "Signed social networks have become increasingly important in recent years because of the ability to model trust-based relationships in review sites like Slashdot, Epinions, and Wikipedia. As a result, many traditional network mining problems have been re-visited in the context of networks in which signs are associated with the links. Examples of such problems include community detection, link prediction, and low rank approximation. In this paper, we will examine the problem of ranking nodes in signed networks. 
In particular, we will design a ranking model, which has a clear physical interpretation in terms of the sign of the edges in the network. Specifically, we propose the Troll-Trust model that models the probability of trustworthiness of individual data sources as an interpretation for the underlying ranking values. We will show the advantages of this approach over a variety of baselines.", "title": "" }, { "docid": "d003deabc7748959e8c5cc220b243e70", "text": "INTRODUCTION In Britain today, children by the age of 10 years have regular access to an average of five different screens at home. In addition to the main family television, for example, many very young children have their own bedroom TV along with portable handheld computer game consoles (eg, Nintendo, Playstation, Xbox), smartphone with games, internet and video, a family computer and a laptop and/or a tablet computer (eg, iPad). Children routinely engage in two or more forms of screen viewing at the same time, such as TV and laptop. Viewing is starting earlier in life. Nearly one in three American infants has a TV in their bedroom, and almost half of all infants watch TV or DVDs for nearly 2 h/day. Across the industrialised world, watching screen media is the main pastime of children. Over the course of childhood, children spend more time watching TV than they spend in school. When including computer games, internet and DVDs, by the age of seven years, a child born today will have spent one full year of 24 h days watching screen media. By the age of 18 years, the average European child will have spent 3 years of 24 h days watching screen media; at this rate, by the age of 80 years, they will have spent 17.6 years glued to media screens. Yet, irrespective of the content or educational value of what is being viewed, the sheer amount of average daily screen time (ST) during discretionary hours after school is increasingly being considered an independent risk factor for disease, and is recognised as such by other governments and medical bodies but not, however, in Britain or in most of the EU. To date, views of the British and European medical establishments on increasingly high levels of child ST remain conspicuous by their absence. This paper will highlight the dramatic increase in the time children today spend watching screen media. It will provide a brief overview of some specific health and well-being concerns of current viewing levels, explain why screen viewing is distinct from other forms of sedentary behaviour, and point to the potential public health benefits of a reduction in ST. It is proposed that Britain and Europe’s medical establishments now offer guidance on the average number of hours per day children spend viewing screen media, and the age at which they start.", "title": "" }, { "docid": "ea96aa3b9f162c69c738be2b190db9e0", "text": "Batteries are currently being developed to power an increasingly diverse range of applications, from cars to microchips. How can scientists achieve the performance that each application demands? How will batteries be able to power the many other portable devices that will no doubt be developed in the coming years? And how can batteries become a sustainable technology for the future? The technological revolution of the past few centuries has been fuelled mainly by variations of the combustion reaction, the fire that marked the dawn of humanity. But this has come at a price: the resulting emissions of carbon dioxide have driven global climate change. 
For the sake of future generations, we urgently need to reconsider how we use energy in everything from barbecues to jet aeroplanes and power stations. If a new energy economy is to emerge, it must be based on a cheap and sustainable energy supply. One of the most flagrantly wasteful activities is travel, and here battery devices can potentially provide a solution, especially as they can be used to store energy from sustainable sources such as the wind and solar power. Because batteries are inherently simple in concept, it is surprising that their development has progressed much more slowly than other areas of electronics. As a result, they are often seen as being the heaviest, costliest and least-green components of any electronic device. It was the lack of good batteries that slowed down the deployment of electric cars and wireless communication, which date from at least 1899 and 1920, respectively (Fig. 1). The slow progress is due to the lack of suitable electrode materials and electrolytes, together with difficulties in mastering the interfaces between them. All batteries are composed of two electrodes connected by an ionically conductive material called an electrolyte. The two electrodes have different chemical potentials, dictated by the chemistry that occurs at each. When these electrodes are connected by means of an external device, electrons spontaneously flow from the more negative to the more positive potential. Ions are transported through the electrolyte, maintaining the charge balance, and electrical energy can be tapped by the external circuit. In secondary, or rechargeable, batteries, a larger voltage applied in the opposite direction can cause the battery to recharge. The amount of electrical energy per mass or volume that a battery can deliver is a function of the cell's voltage and capacity, which are dependent on the …", "title": "" }, { "docid": "5e376e42186e894ca78e8d1c50d33911", "text": "We consider a family of chaotic skew tent maps. The skew tent map is a two-parameter, piecewise-linear, weakly-unimodal, map of the interval Fa;b. We show that Fa;b is Markov for a dense set of parameters in the chaotic region, and we exactly ®nd the probability density function (pdf), for any of these maps. It is well known (Boyarsky A, G ora P. Laws of chaos: invariant measures and dynamical systems in one dimension. Boston: Birkhauser, 1997), that when a sequence of transformations has a uniform limit F, and the corresponding sequence of invariant pdfs has a weak limit, then that invariant pdf must be F invariant. However, we show in the case of a family of skew tent maps that not only does a suitable sequence of convergent sequence exist, but they can be constructed entirely within the family of skew tent maps. Furthermore, such a sequence can be found amongst the set of Markov transformations, for which pdfs are easily and exactly calculated. We then apply these results to exactly integrate Lyapunov exponents. Ó 2000 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "a5296748b0a93696e7b15f7db9d68384", "text": "Microscopic analysis of breast tissues is necessary for a definitive diagnosis of breast cancer which is the most common cancer among women. Pathology examination requires time consuming scanning through tissue images under different magnification levels to find clinical assessment clues to produce correct diagnoses. 
Advances in digital imaging techniques offers assessment of pathology images using computer vision and machine learning methods which could automate some of the tasks in the diagnostic pathology workflow. Such automation could be beneficial to obtain fast and precise quantification, reduce observer variability, and increase objectivity. In this work, we propose to classify breast cancer histopathology images independent of their magnifications using convolutional neural networks (CNNs). We propose two different architectures; single task CNN is used to predict malignancy and multi-task CNN is used to predict both malignancy and image magnification level simultaneously. Evaluations and comparisons with previous results are carried out on BreaKHis dataset. Experimental results show that our magnification independent CNN approach improved the performance of magnification specific model. Our results in this limited set of training data are comparable with previous state-of-the-art results obtained by hand-crafted features. However, unlike previous methods, our approach has potential to directly benefit from additional training data, and such additional data could be captured with same or different magnification levels than previous data.", "title": "" }, { "docid": "bfde0c836406a25a08b7c95b330aaafa", "text": "The concept of agile process models has gained great popularity in software (SW) development community in past few years. Agile models promote fast development. This property has certain drawbacks, such as poor documentation and bad quality. Fast development promotes use of agile process models in small-scale projects. This paper modifies and evaluates extreme programming (XP) process model and proposes a novel adaptive process mode based on these modifications. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4bf253b2349978d17fd9c2400df61d21", "text": "This paper proposes an architecture for the mapping between syntax and phonology – in particular, that aspect of phonology that determines the linear ordering of words. We propose that linearization is restricted in two key ways. (1) the relative ordering of words is fixed at the end of each phase, or ‘‘Spell-out domain’’; and (2) ordering established in an earlier phase may not be revised or contradicted in a later phase. As a consequence, overt extraction out of a phase P may apply only if the result leaves unchanged the precedence relations established in P. We argue first that this architecture (‘‘cyclic linearization’’) gives us a means of understanding the reasons for successive-cyclic movement. We then turn our attention to more specific predictions of the proposal: in particular, the effects of Holmberg’s Generalization on Scandinavian Object Shift; and also the Inverse Holmberg Effects found in Scandinavian ‘‘Quantifier Movement’’ constructions (Rögnvaldsson (1987); Jónsson (1996); Svenonius (2000)) and in Korean scrambling configurations (Ko (2003, 2004)). The cyclic linearization proposal makes predictions that cross-cut the details of particular syntactic configurations. For example, whether an apparent case of verb fronting results from V-to-C movement or from ‘‘remnant movement’’ of a VP whose complements have been removed by other processes, the verb should still be required to precede its complements after fronting if it preceded them before fronting according to an ordering established at an earlier phase. 
We argue that ‘‘cross-construction’’ consistency of this sort is in fact found.", "title": "" }, { "docid": "9ce14872fe5556573b9e17c9ec141e6c", "text": "This paper presents an integrated design method for pedestrian avoidance by considering the interaction between trajectory planning and trajectory tracking. This method aims to reduce the need for control calibration by properly considering plant uncertainties and tire force limits at the design stage. Two phases of pedestrian avoidance—trajectory planning and trajectory tracking—are designed in an integrated manner. The available tire force is distributed to the feedforward part, which is used to generate the nominal trajectory in trajectory planning phase, and to the feedback part, which is used for trajectory tracking. The trajectory planning problem is solved not by searching through a continuous spectrum of steering/braking actions, but by examining a limited set of “motion primitives,” or motion templates that can be adopted in sequence to avoid the pedestrian. An emergency rapid random tree (RRT) methodology is proposed to quickly identify a feasible solution. Subsequently, in order to guarantee accuracy and provide safety margin in trajectory tracking with presence of model uncertainties and exogenous disturbance, a simplified LQR-based funnel algorithm is proposed. Simulation results provide insight into how pedestrian collisions can be avoided under given initial vehicle and pedestrian states.", "title": "" }, { "docid": "92d5ebd49670681a5d43ba90731ae013", "text": "Prior work has shown that return oriented programming (ROP) can be used to bypass W⊕X, a software defense that stops shellcode, by reusing instructions from large libraries such as libc. Modern operating systems have since enabled address randomization (ASLR), which randomizes the location of libc, making these techniques unusable in practice. However, modern ASLR implementations leave smaller amounts of executable code unrandomized and it has been unclear whether an attacker can use these small code fragments to construct payloads in the general case. In this paper, we show defenses as currently deployed can be bypassed with new techniques for automatically creating ROP payloads from small amounts of unrandomized code. We propose using semantic program verification techniques for identifying the functionality of gadgets, and design a ROP compiler that is resistant to missing gadget types. To demonstrate our techniques, we build Q, an end-to-end system that automatically generates ROP payloads for a given binary. Q can produce payloads for 80% of Linux /usr/bin programs larger than 20KB. We also show that Q can automatically perform exploit hardening: given an exploit that crashes with defenses on, Q outputs an exploit that bypasses both W⊕X and ASLR. We show that Q can harden nine realworld Linux and Windows exploits, enabling an attacker to automatically bypass defenses as deployed by industry for those programs.", "title": "" }, { "docid": "4ffa0e5a75eff20ae22f41067d22ee73", "text": "In digital advertising, advertisers want to reach the right audience over media channels such as display, mobile, video, or social at the appropriate cost. The right audience for an advertiser consists of existing customers as well as valuable prospects, those that can potentially be turned into future customers. 
Identifying valuable prospects is called the audience extension problem because advertisers find new customers by extending the desirable criteria for their starting point, which is their existing audience or customers. The complexity of the audience extension problem stems from the difficulty of defining desirable criteria objectively, the number of desirable criteria (such as similarity, diversity, performance) to simultaneously satisfy, and the expected runtime (a few minutes) to find a solution over billions of cookie-based users. In this paper, we formally define the audience extension problem, propose an algorithm that extends a given audience set efficiently under multiple desirable criteria, and experimentally validate its performance. Instead of iterating over individual users, the algorithm takes in Boolean rules that define the seed audience and returns a new set of Boolean rules that corresponds to the extended audience that satisfy the multiple criteria.", "title": "" }, { "docid": "116e77b20db84c72364723a4c22cdb0a", "text": "While many organizations turn to human computation labor markets for jobs with black-or-white solutions, there is vast potential in asking these workers for original thought and innovation.", "title": "" }, { "docid": "cbe1c53bc389fb9a40a79e480ad1607a", "text": "The GuideCane is a novel device designed to help blind or visually impaired users navigate safely and quickly among obstacles and other hazards. During operation, the user pushes the lightweight GuideCane forward. When the GuideCane’s ultrasonic sensors detect an obstacle, the embedded computer determines a suitable direction of motion that steers the GuideCane and the user around it. The steering action results in a very noticeable force felt in the handle, which easily guides the user without any conscious effort on his/her part.", "title": "" } ]
scidocsrr
3d7e14291d4b3780f7ab429129058124
Sparse Phase Retrieval via Truncated Amplitude Flow
[ { "docid": "51d0ebd5fb727524810646c23487bbb1", "text": "We consider the problem of phase retrieval, namely, recovery of a signal from the magnitude of its Fourier transform, or of any other linear transform. Due to the loss of Fourier phase information, this problem is ill-posed. Therefore, prior information on the signal is needed in order to enable its recovery. In this work we consider the case in which the signal is known to be sparse, i.e., it consists of a small number of nonzero elements in an appropriate basis. We propose a fast local search method for recovering a sparse signal from measurements of its Fourier transform (or other linear transform) magnitude which we refer to as GESPAR: GrEedy Sparse PhAse Retrieval. Our algorithm does not require matrix lifting, unlike previous approaches, and therefore is potentially suitable for large scale problems such as images. Simulation results indicate that GESPAR is fast and more accurate than existing techniques in a variety of settings.", "title": "" }, { "docid": "5d527ad4493860a8d96283a5c58c3979", "text": "Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error reduction algorithm of Gerchberg and Saxton and Fienup is still the popular choice for solving many variants of this problem. The algorithm is based on alternating minimization; i.e., it alternates between estimating the missing phase information, and the candidate solution. Despite its wide usage in practice, no global convergence guarantees for this algorithm are known. In this paper, we show that a (resampling) variant of this approach converges geometrically to the solution of one such problem-finding a vector x from y, A, where y = |ATx| and |z| denotes a vector of element-wise magnitudes of z-under the assumption that A is Gaussian. Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, it is much more efficient and can scale to large problems. Analytically, for a resampling version of alternating minimization, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the first theoretical guarantee for alternating minimization (albeit with resampling) for any variant of phase retrieval problems in the non-convex setting.", "title": "" } ]
[ { "docid": "8698c9a18ed9173b132d122237294963", "text": "We propose Deep Feature Interpolation (DFI), a new data-driven baseline for automatic high-resolution image transformation. As the name suggests, DFI relies only on simple linear interpolation of deep convolutional features from pre-trained convnets. We show that despite its simplicity, DFI can perform high-level semantic transformations like make older/younger, make bespectacled, add smile, among others, surprisingly well&#x2013;sometimes even matching or outperforming the state-of-the-art. This is particularly unexpected as DFI requires no specialized network architecture or even any deep network to be trained for these tasks. DFI therefore can be used as a new baseline to evaluate more complex algorithms and provides a practical answer to the question of which image transformation tasks are still challenging after the advent of deep learning.", "title": "" }, { "docid": "d390b0e5b1892297af37659fb92c03b5", "text": "Encouraged by recent waves of successful applications of deep learning, some researchers have demonstrated the effectiveness of applying convolutional neural networks (CNN) to time series classification problems. However, CNN and other traditional methods require the input data to be of the same dimension which prevents its direct application on data of various lengths and multi-channel time series with different sampling rates across channels. Long short-term memory (LSTM), another tool in the deep learning arsenal and with its design nature, is more appropriate for problems involving time series such as speech recognition and language translation. In this paper, we propose a novel model incorporating a sequence-to-sequence model that consists two LSTMs, one encoder and one decoder. The encoder LSTM accepts input time series of arbitrary lengths, extracts information from the raw data and based on which the decoder LSTM constructs fixed length sequences that can be regarded as discriminatory features. For better utilization of the raw data, we also introduce the attention mechanism into our model so that the feature generation process can peek at the raw data and focus its attention on the part of the raw data that is most relevant to the feature under construction. We call our model S2SwA, as the short for Sequence-to-Sequence with Attention. We test S2SwA on both uni-channel and multi-channel time series datasets and show that our model is competitive with the state-of-the-art in real world tasks such as human activity recognition.", "title": "" }, { "docid": "7e6fafe512ccb0a9760fab1b14aa374f", "text": "Studying execution of concurrent real-time online systems, to identify far-reaching and hard to reproduce latency and performance problems, requires a mechanism able to cope with voluminous information extracted from execution traces. Furthermore, the workload must not be disturbed by tracing, thereby causing the problematic behavior to become unreproducible.\n In order to satisfy this low-disturbance constraint, we created the LTTng kernel tracer. 
It is designed to enable safe and race-free attachment of probes virtually anywhere in the operating system, including sites executed in non-maskable interrupt context.\n In addition to being reentrant with respect to all kernel execution contexts, LTTng offers good performance and scalability, mainly due to its use of per-CPU data structures, local atomic operations as main buffer synchronization primitive, and RCU (Read-Copy Update) mechanism to control tracing.\n Given that kernel infrastructure used by the tracer could lead to infinite recursion if traced, and typically requires non-atomic synchronization, this paper proposes an asynchronous mechanism to inform the kernel that a buffer is ready to read. This ensures that tracing sites do not require any kernel primitive, and therefore protects from infinite recursion.\n This paper presents the core of LTTng's buffering algorithms and measures its performance.", "title": "" }, { "docid": "ac07682e0fa700a8f0c9df025feb2c53", "text": "Today's web applications run inside a complex browser environment that is buggy, ill-specified, and implemented in different ways by different browsers. Thus, web applications that desire robustness must use a variety of conditional code paths and ugly hacks to deal with the vagaries of their runtime. Our new exokernel browser, called Atlantis, solves this problem by providing pages with an extensible execution environment. Atlantis defines a narrow API for basic services like collecting user input, exchanging network data, and rendering images. By composing these primitives, web pages can define custom, high-level execution environments. Thus, an application which does not want a dependence on Atlantis'predefined web stack can selectively redefine components of that stack, or define markup formats and scripting languages that look nothing like the current browser runtime. Unlike prior microkernel browsers like OP, and unlike compile-to-JavaScript frameworks like GWT, Atlantis is the first browsing system to truly minimize a web page's dependence on black box browser code. This makes it much easier to develop robust, secure web applications.", "title": "" }, { "docid": "652912f2cc5b2e93525cb25aec8d7c8d", "text": "This paper presents a slotted-rectangular patch antenna with proximity-coupled feed operated at dual band of millimeter-wave (mmV) frequencies, 28GHz and 38GHz. The antenna was built in multilayer substrate construct by 10-layers Low temperature Co-fiber Ceramic (LTCC) with 5 mils thickness each. The slotted-patch and thick substrate are configured to enhance the bandwidth and obtain a good result in gain as well. The bandwidth using are 21.3% at 28GHz and 13.0% at 38GHz with the direction gains are 8.63dBi and 8.62dBi at 28GHz and 38GHz respectively.", "title": "" }, { "docid": "65a8c1faa262cd428045854ffcae3fae", "text": "Extracting named entities in text and linking extracted names to a given knowledge base are fundamental tasks in applications for text understanding. Existing systems typically run a named entity recognition (NER) model to extract entity names first, then run an entity linking model to link extracted names to a knowledge base. NER and linking models are usually trained separately, and the mutual dependency between the two tasks is ignored. We propose JERL, Joint Entity Recognition and Linking, to jointly model NER and linking tasks and capture the mutual dependency between them. It allows the information from each task to improve the performance of the other. 
To the best of our knowledge, JERL is the first model to jointly optimize NER and linking tasks together completely. In experiments on the CoNLL’03/AIDA data set, JERL outperforms state-of-the-art NER and linking systems, and we find improvements of 0.4% absolute F1 for NER on CoNLL’03, and 0.36% absolute precision@1 for linking on AIDA.", "title": "" }, { "docid": "554d234697cd98bf790444fe630c179b", "text": "This paper presents a novel approach for search engine results clustering that relies on the semantics of the retrieved documents rather than the terms in those documents. The proposed approach takes into consideration both lexical and semantic similarities among documents and applies an activation spreading technique in order to generate semantically meaningful clusters. This approach allows documents that are semantically similar to be clustered together rather than clustering documents based on similar terms. A prototype is implemented and several experiments are conducted to test the proposed solution. The result of the experiment confirmed that the proposed solution achieves remarkable results in terms of precision.", "title": "" }, { "docid": "233cb91d9d3b6aefbeb065f6ad6d8e80", "text": "This thesis addresses the problem of verifying the geographic locations of Internet clients. First, we demonstrate how current state-of-the-art delay-based geolocation techniques are susceptible to evasion through delay manipulations, which involve both increasing and decreasing the Internet delays that are observed between a client and a remote measuring party. We find that delay-based techniques generally lack appropriate mechanisms to measure delays in an integrity-preserving manner. We then discuss different strategies enabling an adversary to benefit from being able to manipulate the delays. Upon analyzing the effect of these strategies on three representative delay-based techniques, we found that the strategies combined with the ability of full delay manipulation can allow an adversary to (fraudulently) control the location returned by those geolocation techniques accurately. We then propose Client Presence Verification (CPV) as a delay-based technique to verify an assertion about a client’s physical presence in a prescribed geographic region. Three verifiers geographically encapsulating a client’s asserted location are used to corroborate that assertion by measuring the delays between themselves and the client. CPV infers geographic distances from these delays and thus, using the smaller of the forward and reverse one-way delay between each verifier and the client is expected to result in a more accurate distance inference than using the conventional round-trip times. Accordingly, we devise a novel protocol for accurate one-way delay measurements between the client and the three verifiers to be used by CPV, taking into account that the client could manipulate the measurements to defeat the verification process. We evaluate CPV through extensive real-world experiments with legitimate clients (those truly present at where they asserted to be) modeled to use both wired and wireless access networks. Wired evaluation is done using the PlanetLab testbed, during which we examine various factors affecting CPV’s efficacy, such as the client’s geographical nearness to the verifiers. For wireless evaluation, we leverage the Internet delay information collected for wired clients from PlanetLab, and model additional delays representing the last-mile wireless link. 
The additional delays were generated following wireless delay distribution models studied in the literature. Again, we examine various factors that affect CPV’s efficacy, including the number of devices actively competing for the wireless media in the vicinity of a wireless legitimate CPV client. Finally, we reinforce CPV against a (hypothetical) middlebox that an adversary specifically customizes to defeat CPV (i.e., assuming an adversary that is aware of how CPV operates). We postulate that public middlebox service providers (e.g., in the form of Virtual Private Networks) would be motivated to defeat CPV if it is to be widely adopted in practice. To that end, we propose to use a Proof-ofWork mechanism that allows CPV to impose constraints, which effectively limit the number of clients (now adversaries) simultaneously colluding with that middlebox; beyond that number, CPV detects the middlebox.", "title": "" }, { "docid": "7bde5b5c0980eb2be0827cd29803e542", "text": "Image authentication verifies the originality of an image by detecting malicious manipulations. This goal is different from that of image watermarking which embeds into the image a signature surviving most manipulations. Most existing methods for image authentication treat all types of manipulation equally (i.e., as unacceptable). However, some applications demand techniques that can distinguish acceptable manipulations (e.g., compression) from malicious ones. In this paper, we describe an effective technique for image authentication, which can prevent malicious manipulations but allow JPEG lossy compression. The authentication signature is based on the invariance of the relationship between the DCT coefficients at the same position in separate blocks of an image. This relationship will be preserved when these coefficients are quantized in a JPEG compression process. Our proposed method can distinguish malicious manipulations from JPEG lossy compression regardless of how high the compression ratio is. We also show that, in different practical cases, the design of the authenticator depends on the number of recompression times, and whether the image is decoded into integral values in the pixel domain during the recompression process. Theoretical and experimental results indicate that this technique is effective for image authentication.", "title": "" }, { "docid": "a817c58408d1623cd82e243147c498ca", "text": "Very few attempts, if any, have been made to use visible light in corneal reflection approaches to the problem of gaze tracking. The reasons usually given to justify the limited application of this type of illumination are that the required image features are less accurately depicted, and that visible light may disturb the user. The aim of this paper is to show that it is possible to overcome these difficulties and build an accurate and robust gaze tracker under these circumstances. For this purpose, visible light is used to obtain the corneal reflection or glint in a way analogous to the well-known pupil center corneal reflection technique. Due to the lack of contrast, the center of the iris is tracked instead of the center of the pupil. The experiments performed in our laboratory have shown very satisfactory results, allowing free-head movement and no need of recalibration.", "title": "" }, { "docid": "26f393df2f3e7c16db2ee10d189efb37", "text": "Recently a few systems for automatically solving math word problems have reported promising results. 
However, the datasets used for evaluation have limitations in both scale and diversity. In this paper, we build a large-scale dataset which is more than 9 times the size of previous ones, and contains many more problem types. Problems in the dataset are semiautomatically obtained from community question-answering (CQA) web pages. A ranking SVM model is trained to automatically extract problem answers from the answer text provided by CQA users, which significantly reduces human annotation cost. Experiments conducted on the new dataset lead to interesting and surprising results.", "title": "" }, { "docid": "1436e4fddc73d33a6cf83abfa5c9eb02", "text": "The aim of our study was to provide a contribution to the research field of the critical success factors (CSFs) of ERP projects, with specific focus on smaller enterprises (SMEs). Therefore, we conducted a systematic literature review in order to update the existing reviews of CSFs. On the basis of that review, we led several interviews with ERP consultants experienced with ERP implementations in SMEs. As a result, we showed that all factors found in the literature also affected the success of ERP projects in SMEs. However, within those projects, technological factors gained much more importance compared to the factors that most influence the success of larger ERP projects. For SMEs, factors like the Organizational fit of the ERP system as well as ERP system tests were even more important than Top management support or Project management, which were the most important factors for large-scale companies.", "title": "" }, { "docid": "56c5ec77f7b39692d8b0d5da0e14f82a", "text": "Using tweets extracted from Twitter during the Australian 2010-2011 floods, social network analysis techniques were used to generate and analyse the online networks that emerged at that time. The aim was to develop an understanding of the online communities for the Queensland, New South Wales and Victorian floods in order to identify active players and their effectiveness in disseminating critical information. A secondary goal was to identify important online resources disseminated by these communities. Important and effective players during the Queensland floods were found to be: local authorities (mainly the Queensland Police Services), political personalities (Queensland Premier, Prime Minister, Opposition Leader, Member of Parliament), social media volunteers, traditional media reporters, and people from not-for-profit, humanitarian, and community associations. A range of important resources were identified during the Queensland flood; however, they appeared to be of a more general information nature rather than vital information and updates on the disaster. Unlike Queensland, there was no evidence of Twitter activity from the part of local authorities and the government in the New South Wales and Victorian floods. Furthermore, the level of Twitter activity during the NSW floods was almost nil. Most of the active players during the NSW and Victorian floods were volunteers who were active during the Queensland floods. 
Given the positive results obtained by the active involvement of the local authorities and government officials in Queensland, and the increasing adoption of Twitter in other parts of the world for emergency situations, it seems reasonable to push for greater adoption of Twitter from local and federal authorities Australia-wide during periods of mass emergencies.", "title": "" }, { "docid": "5b786dee43f6b2b15a53bb4f633aefb6", "text": "Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners--for example, medical institutions that may want to apply deep learning methods to clinical records--are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning.\n In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility/privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets.", "title": "" }, { "docid": "bab246f8b15931501049862066fde77f", "text": "The upcoming Internet of Things will introduce large sensor networks including devices with very different propagation characteristics and power consumption demands. 5G aims to fulfill these requirements by demanding a battery lifetime of at least 10 years. To integrate smart devices that are located in challenging propagation conditions, IoT communication technologies furthermore have to support very deep coverage. NB-IoT and eMTC are designed to meet these requirements and thus paving the way to 5G. With the power saving options extended Discontinuous Reception and Power Saving Mode as well as the usage of large numbers of repetitions, NB-IoT and eMTC introduce new techniques to meet the 5G IoT requirements. In this paper, the performance of NB-IoT and eMTC is evaluated. Therefore, data rate, power consumption, latency and spectral efficiency are examined in different coverage conditions. 
Although both technologies use the same power saving techniques as well as repetitions to extend the communication range, the analysis reveals a different performance in the context of data size, rate and coupling loss. While eMTC comes with a 4% better battery lifetime than NB-IoT when considering 144 dB coupling loss, NB-IoT battery lifetime raises to 18% better performance in 164 dB coupling loss scenarios. The overall analysis shows that in coverage areas with a coupling loss of 155 dB or less, eMTC performs better, but requires much more bandwidth. Taking the spectral efficiency into account, NB-IoT is in all evaluated scenarios the better choice and more suitable for future networks with massive numbers of devices.", "title": "" }, { "docid": "08faae46f98a8eab45049c9d3d7aa48e", "text": "One of the assumptions of attachment theory is that individual differences in adult attachment styles emerge from individuals' developmental histories. To examine this assumption empirically, the authors report data from an age 18 follow-up (Booth-LaForce & Roisman, 2012) of the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, a longitudinal investigation that tracked a cohort of children and their parents from birth to age 15. Analyses indicate that individual differences in adult attachment can be traced to variations in the quality of individuals' caregiving environments, their emerging social competence, and the quality of their best friendship. Analyses also indicate that assessments of temperament and most of the specific genetic polymorphisms thus far examined in the literature on genetic correlates of attachment styles are essentially uncorrelated with adult attachment, with the exception of a polymorphism in the serotonin receptor gene (HTR2A rs6313), which modestly predicted higher attachment anxiety and which revealed a Gene × Environment interaction such that changes in maternal sensitivity across time predicted attachment-related avoidance. The implications of these data for contemporary perspectives and debates concerning adult attachment theory are discussed.", "title": "" }, { "docid": "242cc9922b120057fe9f9066f257fb44", "text": "ion Yes No Partly Availability / Mobility No No No Fault tolerance Partly No Partly Flexibility / Event based Yes Partly Partly Uncertainty of information No No No", "title": "" }, { "docid": "36e6bf8dc6d693ca7297e20033ca6af5", "text": "The type III secretion system (TTSS) of gram-negative bacteria is responsible for delivering bacterial proteins, termed effectors, from the bacterial cytosol directly into the interior of host cells. The TTSS is expressed predominantly by pathogenic bacteria and is usually used to introduce deleterious effectors into host cells. While biochemical activities of effectors vary widely, the TTSS apparatus used to deliver these effectors is conserved and shows functional complementarity for secretion and translocation. This review focuses on proteins that constitute the TTSS apparatus and on mechanisms that guide effectors to the TTSS apparatus for transport. The TTSS apparatus includes predicted integral inner membrane proteins that are conserved widely across TTSSs and in the basal body of the bacterial flagellum. It also includes proteins that are specific to the TTSS and contribute to ring-like structures in the inner membrane and includes secretin family members that form ring-like structures in the outer membrane. 
Most prominently situated on these coaxial, membrane-embedded rings is a needle-like or pilus-like structure that is implicated as a conduit for effector translocation into host cells. A short region of mRNA sequence or protein sequence in effectors acts as a signal sequence, directing proteins for transport through the TTSS. Additionally, a number of effectors require the action of specific TTSS chaperones for efficient and physiologically meaningful translocation into host cells. Numerous models explaining how effectors are transported into host cells have been proposed, but understanding of this process is incomplete and this topic remains an active area of inquiry.", "title": "" }, { "docid": "012ac031a519d6e96d479b25a41afcdb", "text": "is one of the most comprehensively studied ingredients in the food supply. Yet, despite our considerable knowledge of caffeine and centuries of safe consumption in foods and beverages, questions and misperceptions about the potential health effects associated with caffeine persist. This Review provides up-to-date information on caffeine, examines its safety and summarizes the most recent key research conducted on caffeine and health. EXECUTIVE SUMMARY Caffeine is added to soft drinks as a flavoring agent; it imparts a bitterness that modifies the flavors of other components, both sour and sweet. Although there has been controversy as to its effectiveness in this role, a review of the literature suggests that caffeine does, in fact, contribute to the sensory appeal of soft drinks. [Drewnowski, 2001] Moderate intake of 300 mg/day (about three cups of coffee per day) of caffeine does not cause adverse health effects in healthy adults, although some groups, including those with hypertension and the elderly, may be more vulnerable. Also, regular consumers of coffee and other caffeinated beverages may experience some undesirable, but mild, short-lived symptoms if they stop consuming caffeine , particularly if the cessation is abrupt. However, there is little evidence of health risks of caffeine consumption. In fact, some evidence of health benefits exists for adults who consume moderate amounts of caffeine. Caffeine consumption may help reduce the risk of several chronic diseases, including diabetes, Parkinson's disease, liver disease, and colorectal cancer, as well as improve immune function. Large prospective cohort studies in the Netherlands, Finland, Sweden, and the United States have found caffeine consumption is associated with reduced risk of developing type 2 diabetes, although the mechanisms are unclear. Several other cohort studies have found that caffeine consumption from coffee and other beverages decreases the risk of Parkinson's Disease in men, as well as in women who have never used post-menopausal hormone replacement therapy. Epidemiological studies also suggest that coffee consumption may decrease the risk of liver injury, cirrhosis and hepatocellular carcinoma (liver cancer), although the reasons for these results have not been determined. In addition, coffee consumption appears to reduce the risk of colorectal cancer, but this has not generally been confirmed in prospective cohort studies. An anti-inflammatory effect has also been observed in a number of studies on caffeine's impact on the immune system. Most studies have found that caffeine consumption does not significantly increase the risk of coronary heart disease (CHD) or stroke. …", "title": "" } ]
scidocsrr
de385e14b5f6439568713d17bfac5a90
Learning Deep Structure-Preserving Image-Text Embeddings
[ { "docid": "c879ee3945592f2e39bb3306602bb46a", "text": "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.", "title": "" }, { "docid": "2052b47be2b5e4d0c54ab0be6ae1958b", "text": "Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent “1-slack” reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at www.joachims.org .", "title": "" }, { "docid": "c9ecb6ac5417b5fea04e5371e4250361", "text": "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.", "title": "" }, { "docid": "fa82b75a3244ef2407c2d14c8a3a5918", "text": "Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. 
Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design.", "title": "" } ]
[ { "docid": "c7a902faf84eabe5c7d298c2c83c4617", "text": "Fangwei Li Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing, China lifw@cqupt.edu.cn Xinyue Zhang Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing, China zhangxinyue159@163.com Jiang Zhu Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing, China zhujiang@cqupt.edu.cn Yan Wang Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing, China wangyan2250@sina.com ABSTRACT In order to reflect the situation of network security assessment performance fully and accurately, a new network security situation awareness model based on information fusion was proposed. Network security situation is the result of fusion three aspects evaluation. In terms of attack, to improve the accuracy of evaluation, a situation assessment method of DDoS attack based on the information of data packet was proposed. In terms of vulnerability, a improved Common Vulnerability Scoring System (CVSS) was raised and maked the assessment more comprehensive. In terms of node weights, the method of calculating the combined weights and optimizing the result by Sequence Quadratic Program (SQP) algorithm which reduced the uncertainty of fusion was raised. To verify the validity and necessity of the method, a testing platform was built and used to test through evaluating 2000 DAPRA data sets. Experiments show that the method can improve the accuracy of evaluation results.", "title": "" }, { "docid": "1635b235c59cc57682735202c0bb2e0d", "text": "The introduction of structural imaging of the brain by computed tomography (CT) scans and magnetic resonance imaging (MRI) has further refined classification of head injury for prognostic, diagnosis, and treatment purposes. We describe a new classification scheme to be used both as a research and a clinical tool in association with other predictors of neurologic status.", "title": "" }, { "docid": "f9076f4dbc5789e89ed758d0ad2c6f18", "text": "This paper presents an innovative manner of obtaining discriminative texture signatures by using the LBP approach to extract additional sources of information from an input image and by using fractal dimension to calculate features from these sources. Four strategies, called Min, Max, Diff Min and Diff Max , were tested, and the best success rates were obtained when all of them were employed together, resulting in an accuracy of 99.25%, 72.50% and 86.52% for the Brodatz, UIUC and USPTex databases, respectively, using Linear Discriminant Analysis. These results surpassed all the compared methods in almost all the tests and, therefore, confirm that the proposed approach is an effective tool for texture analysis. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "fc7705cc3fc4b1114c4f7542ae210947", "text": "Arsenic (As) is one of the most toxic contaminants found in the environment. Development of novel detection methods for As species in water with the potential for field use has been an urgent need in recent years. In past decades, surface-enhanced Raman scattering (SERS) has gained a reputation as one of the most sensitive spectroscopic methods for chemical and biomolecular sensing. 
The SERS technique has emerged as an extremely promising solution for in-situ detection of arsenic species in the field, particularly when coupled with portable/handheld Raman spectrometers. In this article, the recent advances in SERS analysis of arsenic species in water media are reviewed, and the potential of this technique for fast screening and field testing of arsenic-contaminated environmental water samples is discussed. The problems that remain in the field are also discussed and an outlook for the future is featured at the end of the article.", "title": "" }, { "docid": "a1c9f24275ce626552602cf068776a3c", "text": "The field of topology optimization seeks to optimize shapes under structural objectives, such as achieving the most rigid shape using a given quantity of material. Besides optimal shape design, these methods are increasingly popular as design tools, since they automatically produce structures having desirable physical properties, a task hard to perform by hand even for skilled designers. However, there is no simple way to control the appearance of the generated objects.\n In this paper, we propose to optimize shapes for both their structural properties and their appearance, the latter being controlled by a user-provided pattern example. These two objectives are challenging to combine, as optimal structural properties fully define the shape, leaving no degrees of freedom for appearance. We propose a new formulation where appearance is optimized as an objective while structural properties serve as constraints. This produces shapes with sufficient rigidity while allowing enough freedom for the appearance of the final structure to resemble the input exemplar.\n Our approach generates rigid shapes using a specified quantity of material while observing optional constraints such as voids, fills, attachment points, and external forces. The appearance is defined by examples, making our technique accessible to casual users. We demonstrate its use in the context of fabrication using a laser cutter to manufacture real objects from optimized shapes.", "title": "" }, { "docid": "dcfa6f1137c042af4b6fda829b617bb8", "text": "In this letter, a novel dumbbell-shaped slot along with thin substrate integrated waveguide (SIW) cavity backing ( ) is used to design dual-frequency slot antenna. The proposed design exhibits unidirectional radiation characteristics, high gain, high front to back ratio (FTBR) at each resonant frequency while maintaining low profile, planar configuration. The unique slot shape helps to introduce complex current distribution at different frequencies that results in simultaneous excitation of hybrid mode at higher frequency along with conventional TE120 mode in the cavity. Both conventional mode and the hybrid mode helps the modified slot to radiate at the corresponding resonant frequencies resulting in compact, dual-frequency antenna. A fabricated prototype is also presented which resonates at 9.5 GHz and 13.85 GHz with impedance bandwidth more than 1.5% at both resonant frequencies and gain of 4.8 dBi and 3.74 dBi respectively. The front-to-back ratio of the antenna are above 10 dB at both operating frequencies.", "title": "" }, { "docid": "b7dcd24f098965ff757b7ce5f183662b", "text": "We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. 
Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.", "title": "" }, { "docid": "6504d0174ca664d4975e7bd8baafe8f9", "text": "A framework for evaluating security risks associated with technologies used at home.", "title": "" }, { "docid": "d622cf283f27a32b2846a304c0359c5f", "text": "Reliable verification of image quality of retinal screening images is a prerequisite for the development of automatic screening systems for diabetic retinopathy. A system is presented that can automatically determine whether the quality of a retinal screening image is sufficient for automatic analysis. The system is based on the assumption that an image of sufficient quality should contain particular image structures according to a certain pre-defined distribution. We cluster filterbank response vectors to obtain a compact representation of the image structures found within an image. Using this compact representation together with raw histograms of the R, G, and B color planes, a statistical classifier is trained to distinguish normal from low quality images. The presented system does not require any previous segmentation of the image in contrast with previous work. The system was evaluated on a large, representative set of 1000 images obtained in a screening program. The proposed method, using different feature sets and classifiers, was compared with the ratings of a second human observer. The best system, based on a Support Vector Machine, has performance close to optimal with an area under the ROC curve of 0.9968.", "title": "" }, { "docid": "ad0892ee2e570a8a2f5a90883d15f2d2", "text": "Supervised event extraction systems are limited in their accuracy due to the lack of available training data. We present a method for self-training event extraction systems by bootstrapping additional training data. This is done by taking advantage of the occurrence of multiple mentions of the same event instances across newswire articles from multiple sources. If our system can make a highconfidence extraction of some mentions in such a cluster, it can then acquire diverse training examples by adding the other mentions as well. 
Our experiments show significant performance improvements on multiple event extractors over ACE 2005 and TAC-KBP 2015 datasets.", "title": "" }, { "docid": "cf768855de6b9c33a1b8284b4e24383f", "text": "The Value Sensitive Design (VSD) methodology provides a comprehensive framework for advancing a value-centered research and design agenda. Although VSD provides helpful ways of thinking about and designing value-centered computational systems, we argue that the specific mechanics of VSD create thorny tensions with respect to value sensitivity. In particular, we examine limitations due to value classifications, inadequate guidance on empirical tools for design, and the ways in which the design process is ordered. In this paper, we propose ways of maturing the VSD methodology to overcome these limitations and present three empirical case studies that illustrate a family of methods to effectively engage local expressions of values. The findings from our case studies provide evidence of how we can mature the VSD methodology to mitigate the pitfalls of classification and engender a commitment to reflect on and respond to local contexts of design.", "title": "" }, { "docid": "a6ea435c346d2d3051d1fc31db59ca35", "text": "As news reading on social media becomes more and more popular, fake news becomes a major issue concerning the public and government. The fake news can take advantage of multimedia content to mislead readers and get dissemination, which can cause negative effects or even manipulate the public events. One of the unique challenges for fake news detection on social media is how to identify fake news on newly emerged events. Unfortunately, most of the existing approaches can hardly handle this challenge, since they tend to learn event-specific features that can not be transferred to unseen events. In order to address this issue, we propose an end-to-end framework named Event Adversarial Neural Network (EANN), which can derive event-invariant features and thus benefit the detection of fake news on newly arrived events. It consists of three main components: the multi-modal feature extractor, the fake news detector, and the event discriminator. The multi-modal feature extractor is responsible for extracting the textual and visual features from posts. It cooperates with the fake news detector to learn the discriminable representation for the detection of fake news. The role of event discriminator is to remove the event-specific features and keep shared features among events. Extensive experiments are conducted on multimedia datasets collected from Weibo and Twitter. The experimental results show our proposed EANN model can outperform the state-of-the-art methods, and learn transferable feature representations.", "title": "" }, { "docid": "9b40db1e69a3ad1cc2a1289791e82ae1", "text": "As a nascent area of study, gamification has attracted the interest of researchers in several fields, but such researchers have scarcely focused on creating a theoretical foundation for gamification research. Gamification involves using gamelike features in non-game contexts to motivate users and improve performance outcomes. As a boundary-spanning subject by nature, gamification has drawn the interest of scholars from diverse communities, such as information systems, education, marketing, computer science, and business administration. To establish a theoretical foundation, we need to clearly define and explain gamification in comparison with similar concepts and areas of research. 
Likewise, we need to define the scope of the domain and develop a research agenda that explicitly considers theory’s important role. In this review paper, we set forth the pre-theoretical structures necessary for theory building in this area. Accordingly, we engaged an interdisciplinary group of discussants to evaluate and select the most relevant theories for gamification. Moreover, we developed exemplary research questions to help create a research agenda for gamification. We conclude that using a multi-theoretical perspective in creating a research agenda should help and encourage IS researchers to take a lead role in this promising and emerging area.", "title": "" }, { "docid": "6eb2c0e22ecc0816cb5f83292902d799", "text": "In this paper, we demonstrate that Android malware can bypass all automated analysis systems, including AV solutions, mobile sandboxes, and the Google Bouncer. We propose a tool called Sand-Finger for the fingerprinting of Android-based analysis systems. By analyzing the fingerprints of ten unique analysis environments from different vendors, we were able to find characteristics in which all tested environments differ from actual hardware. Depending on the availability of an analysis system, malware can either behave benignly or load malicious code at runtime. We classify this group of malware as Divide-and-Conquer attacks that are efficiently obfuscated by a combination of fingerprinting and dynamic code loading. In this group, we aggregate attacks that work against dynamic as well as static analysis. To demonstrate our approach, we create proof-of-concept malware that surpasses up-to-date malware scanners for Android. We also prove that known malware samples can enter the Google Play Store by modifying them only slightly. Due to Android's lack of an API for malware scanning at runtime, it is impossible for AV solutions to secure Android devices against these attacks.", "title": "" }, { "docid": "e5752ff995d5c1133761986223269883", "text": "Although much research has been performed on the adoption and usage phases of the information systems life cycle, the final phase, termination, has received little attention. This paper focuses on the development of discontinuous usage intentions, i.e. the behavioural intention in the termination phase, in the context of social networking services (SNSs), where it plays an especially crucial role. We argue that users stressed by using SNSs try to avoid the stress and develop discontinuous usage intentions, which we identify as a behavioural response to SNS-stress creators and SNS-exhaustion. Furthermore, as discontinuing the use of an SNS also takes effort and has costs, we theorize that switching-stress creators and switching-exhaustion reduce discontinuous usage intentions. We tested and validated these effects empirically in an experimental setting monitoring individuals who stopped using Facebook for a certain period and switched to alternatives. Our results show that SNS-stress creators and SNS-exhaustion cause discontinuous usage intentions, and switching-stress creators and switching-exhaustion reduce these intentions.", "title": "" }, { "docid": "d4ee96388ca88c0a5d2a364f826dea91", "text": "Cloud computing, as an emerging computing paradigm, enables users to remotely store their data into a cloud so as to enjoy scalable services on-demand. 
Especially for small and medium-sized enterprises with limited budgets, they can achieve cost savings and productivity enhancements by using cloud-based services to manage projects, to make collaborations, and the like. However, allowing cloud service providers (CSPs), which are not in the same trusted domains as enterprise users, to take care of confidential data, may raise potential security and privacy issues. To keep the sensitive user data confidential against untrusted CSPs, a natural way is to apply cryptographic approaches, by disclosing decryption keys only to authorized users. However, when enterprise users outsource confidential data for sharing on cloud servers, the adopted encryption system should not only support fine-grained access control, but also provide high performance, full delegation, and scalability, so as to best serve the needs of accessing data anytime and anywhere, delegating within enterprises, and achieving a dynamic set of users. In this paper, we propose a scheme to help enterprises to efficiently share confidential data on cloud servers. We achieve this goal by first combining the hierarchical identity-based encryption (HIBE) system and the ciphertext-policy attribute-based encryption (CP-ABE) system, and then making a performance-expressivity tradeoff, finally applying proxy re-encryption and lazy re-encryption to our scheme.", "title": "" }, { "docid": "3dc24285dac52753122c0f974da7b069", "text": "Jeremy Franklin, Richard Galletly, Cat Hines, Charlotte Hogg, Tom Mann, Matthew Manning, Alex Mitchell, Ben Nelson, Raakhi Odedra, Charlotte Pope-Williams, Alice Pugh, Amandeep Rehlon, May Rostom, Emma Sinclair, Anne Wetherilt and Richard Wyatt for their comments and contributions. I would also like to thank Professors Philip Bond and Heidi Johansen-Berg and Alan Milburn and Emma Hardaker-Jones for their insights.", "title": "" }, { "docid": "b34485c65c4e6780166ea0af5f13c08a", "text": "The rise of the Internet of Things (IoT) and the recent focus on a gamut of 'Smart City' initiatives world-wide have pushed for new advances in data stream systems to (1) support complex analytics and evolving graph applications as continuous queries, and (2) deliver fast and scalable processing on large data streams. Unfortunately current continuous query languages (CQL) lack the features and constructs needed to support the more advanced applications. For example recursive queries are now part of SQL, Datalog, and other query languages, but they are not supported by most CQLs, a fact that caused a significant loss of expressive power, which is further aggravated by the limitation that only non-blocking queries can be supported. To overcome these limitations we have developed an advanced stream reasoning system, ASTRO, that builds on recent advances in supporting aggregates in recursive queries. In this demo, we will briefly elucidate the formal Streamlog semantics, which, combined with the Pre-Mappability (PreM) concept, allows the declarative specification of many complex continuous queries, which are then efficiently executed in real-time by the portable ASTRO architecture. 
Using different case studies, we demonstrate (i) the ease-of-use, (ii) the expressive power and (iii) the robustness of our system, as compared to other state-of-the-art declarative CQL systems.", "title": "" }, { "docid": "e02050f14a7567bc6d4b439b8ed7fc48", "text": "The accumulation mechanisms of technetium-99m methylene diphosphonate (99mTc-MDP) were investigated using hydroxyapatite powder and various phosphates. After reaction with99mTc-MDP, radioactivity was analyzed using a scintillation counter. The adsorption of99mTc-MDP onto hydroxyapatite occurred within 30 sec, and was not temperature dependent at 0–95°C. There was no change in the adsorption of99mTc-MDP onto hydroxyapatite in 5 or 50mM water-soluble organic compounds (glucose or urea). Anions had a greater effect on adsorption than cations. The only phosphate with adsorption equal to that of hydroxyapatite was calcium pyrophosphate. Adsorption onto calcium hydrogenphosphate was low at a pH of 6.0 in comparison with hydroxyapatite. These findings suggest that the adsorption of99mTc-MDP onto hydroxyapatite is influenced by the concentration of coexisting anions and by the chemical constitution of the phosphate components.", "title": "" }, { "docid": "3c5bb0b08b365029a3fc1a7ef73e3aa7", "text": "This paper proposes an estimation method to identify the electrical model parameters of photovoltaic (PV) modules and makes a comparison with other methods already popular in the technical literature. Based on the full single-diode model, the mathematical description of the I-V characteristic of modules is generally represented by a coupled nonlinear equation with five unknown parameters, which is difficult to solve by an analytical approach. The aim of the proposed method is to find the five unknown parameters that guarantee the minimum absolute error between the P-V curves generated by the electrical model and the P-V curves provided by the manufacturers' datasheets for different external conditions such as temperature and irradiance. The first advantage of the proposed method is that the parameters are estimated using the P-V curves instead of I-V curves, since most of the applications that use the electrical model want to accurately estimate the extracted power. The second advantage is that the value ranges of each unknown parameter respect their physical meaning. In order to prove the effectiveness of the proposition, a comparison among methods is carried out using both types of P-V and I-V curves: those obtained by manufacturers' datasheets and those extracted experimentally in the laboratory.", "title": "" } ]
scidocsrr
84a41859a72ebbb17541a31a550c799c
Coresets for Discrete Integration and Clustering
[ { "docid": "8ffb63dcee3bc0f541e3ec0df0d46be5", "text": "In this paper, we show the existence of small coresets for the problems of computing k-median and kmeans clustering for points in low dimension. In other words, we show that given a point set P in <, one can compute a weighted set S ⊆ P , of size O(kε−d log n), such that one can compute the k-median/means clustering on S instead of on P , and get an (1 + ε)-approximation. As a result, we improve the fastest known algorithms for (1+ε)-approximate k-means and k-median. Our algorithms have linear running time for a fixed k and ε. In addition, we can maintain the (1+ε)-approximate k-median or k-means clustering of a stream when points are being only inserted, using polylogarithmic space and update time.", "title": "" } ]
[ { "docid": "f97b6740fb9648d1ae4d18d90ca739b5", "text": "In this paper, we draw on the literatures on path dependence and disruptive innovation to examine in an experimental setting how path-dependent firms respond to digital disruption. As our results indicate, in the face of digital disruption, path-dependent firms tend to renovate the technological foundation on which their strategic path is based if they have the opportunity to reproduce their established strategic path. Our findings also suggest that path-dependent firms equally tend to renovate their technological foundation or the targeted market segment in the face of digital disruption if they are unable to reproduce their established strategic path. Our findings provide insights into the challenges that digitization imposes on established firms, complement the literature on path dependence with insights into path disruption, contribute an integrated view to the literature on disruptive innovation, and offer some guidance to practitioners.", "title": "" }, { "docid": "53888fb785c159f1b0cabe5357231238", "text": "In this paper, we propose a smart parking system detecting and finding the parked location of a consumer's vehicle. Using ultrasonic and magnetic sensor, the proposed system detects vehicles in indoor and outdoor parking fields, accurately. Wireless sensor motes support a vehicle location service in parking lots using BLE.", "title": "" }, { "docid": "bda7775f0ec70cf1f80093d484e84332", "text": "Comprehensive situational awareness is paramount to the effectiveness of proprietary navigational and higher-level functions of intelligent vehicles. In this paper, we address a graph-based approach for 2D road representation of 3D point clouds with respect to the road topography. We employ the gradient cues of the road geometry to construct a Markov Random Filed (MRF) and implement an efficient belief propagation (BP) algorithm to classify the road environment into four categories, i.e. the reachable region, the drivable region, the obstacle region and the unknown region. The proposed approach can overcome a wide variety of practical challenges, such as sloped terrains, rough road surfaces, rolling/pitching of the host vehicle, etc., and represent the road environment accurately as well as robustly. Experimental results in typical but challenging environments have substantiated that the proposed approach is more sensitive and reliable than the conventional vertical displacements analysis and show superior performance against other local classifiers.", "title": "" }, { "docid": "1cc962ab0d15a47725858ed5ff5872f6", "text": "Although spontaneous remyelination does occur in multiple sclerosis lesions, its extent within the global population with this disease is presently unknown. We have systematically analysed the incidence and distribution of completely remyelinated lesions (so-called shadow plaques) or partially remyelinated lesions (shadow plaque areas) in 51 autopsies of patients with different clinical courses and disease durations. The extent of remyelination was variable between cases. In 20% of the patients, the extent of remyelination was extensive with 60-96% of the global lesion area remyelinated. Extensive remyelination was found not only in patients with relapsing multiple sclerosis, but also in a subset of patients with progressive disease. Older age at death and longer disease duration were associated with significantly more remyelinated lesions or lesion areas. 
No correlation was found between the extent of remyelination and either gender or age at disease onset. These results suggest that the variable and patient-dependent extent of remyelination must be considered in the design of future clinical trials aimed at promoting CNS repair.", "title": "" }, { "docid": "16fa16d1d27b5800c119a300f4c4a79c", "text": "Deep learning recently shows strong competitiveness to improve polar code decoding. However, suffering from prohibitive training and computation complexity, the conventional deep neural network (DNN) is only possible for very short code length. In this paper, the main problems of deep learning in decoding are well solved. We first present the multiple scaled belief propagation (BP) algorithm, aiming at obtaining faster convergence and better performance. Based on this, a deep neural network decoder (NND) with low complexity and latency is proposed for any code length. The training only requires a small set of zero codewords. Besides, its computation complexity is close to the original BP. Experiment results show that the proposed (64,32) NND with 5 iterations achieves even lower bit error rate (BER) than the 30-iteration conventional BP and (512, 256) NND also outperforms conventional BP decoder with same iterations. The hardware architecture of basic computation block is given and folding technique is also considered, saving about 50% hardware cost.", "title": "" }, { "docid": "afee419227629f8044b5eb0addd65ce3", "text": "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.", "title": "" }, { "docid": "1d483a47ff5c735fd0ee78dfdb9bd4f0", "text": "This paper is concerned with graphical criteria that can be used to solve the problem of identifying causal effects from nonexperimental data in a causal Bayesian network structure, i.e., a directed acyclic graph that represents causal relationships. We first review Pearl’s work on this topic [Pearl, 1995], in which several useful graphical criteria are presented. Then we present a complete algorithm [Huang and Valtorta, 2006b] for the identifiability problem. By exploiting the completeness of this algorithm, we prove that the three basic do-calculus rules that Pearl presents are complete, in the sense that, if a causal effect is identifiable, there exists a sequence of applications of the rules of the do-calculus that transforms the causal effect formula into a formula that only includes observational quantities.", "title": "" }, { "docid": "7d314a77d3b853f37b2b3d59d1255af7", "text": "The paper introduces first insights into a methodology for developing eBusiness business models, which was elaborated at evolaris and is currently validated in various business cases. 
This methodology relies upon a definition of the term business model, which is first examined and upon which prerequisites for such a methodology are presented. A business model is based on a mental representation of certain aspects of the real world that are relevant for the business. Supporting this change of the mental model is therefore a major prerequisite for a methodology for developing business models. This paper demonstrates that, in addition, a business model discussion should be theory based, able to handle complex systems, provide a way for risk free experiments and be practically applicable. In order to fulfill the above criteria, the evolaris methodology is grounded on system theory and combines aspects of system dynamics and action research.", "title": "" }, { "docid": "39ee94a9e947b503bf98db33767bec09", "text": "Female genital mutilation/cutting (FGM/C), which can result in severe pain, haemorrhage and poor birth outcomes, remains a major public health issue. The extent to which prevalence of and attitudes toward the practice have changed in Egypt since its criminalisation in 2008 is unknown. We analysed data from the 2005, 2008 and 2014 Egypt Demographic and Health Surveys to assess trends related to FGM/C. Specifically, we determined whether FGM/C prevalence among ever-married, 15-19-year-old women had changed from 2005 to 2014. We also assessed whether support for FGM/C continuation among ever-married reproductive-age (15-49 years) women had changed over this time period. The prevalence of FGM/C among adolescent women statistically significantly decreased from 94% in 2008 to 88% in 2014 (standard error [SE] = 1.5), after adjusting for education, residence and religion. Prevalence of support for the continuation of FGM/C also statistically significantly decreased from 62% in 2008 to 58% in 2014 (SE = 0.6). The prevalence of FGM/C among ever-married women aged 15-19 years in Egypt has decreased since its criminalisation in 2008, but continues to affect the majority of this subgroup. Likewise, support of FGM/C continuation has also decreased, but continues to be held by a majority of ever-married women of reproductive age.", "title": "" }, { "docid": "7a5f0301f883a15114c955df0aa8d87e", "text": "Random Forests (RFs) are frequently used in many computer vision and machine learning applications. Their popularity is mainly driven by their high computational efficiency during both training and evaluation while achieving state-of-the-art results. However, in most applications RFs are used off-line. This limits their usability for many practical problems, for instance, when training data arrives sequentially or the underlying distribution is continuously changing. In this paper, we propose a novel on-line random forest algorithm. We combine ideas from on-line bagging, extremely randomized forests and propose an on-line decision tree growing procedure. Additionally, we add a temporal weighting scheme for adaptively discarding some trees based on their out-of-bag-error in given time intervals and consequently growing of new trees. The experiments on common machine learning data sets show that our algorithm converges to the performance of the off-line RF. Additionally, we conduct experiments for visual tracking, where we demonstrate real-time state-of-the-art performance on well-known scenarios and show good performance in case of occlusions and appearance changes where we outperform trackers based on on-line boosting. 
Finally, we demonstrate the usability of on-line RFs on the task of interactive real-time segmentation.", "title": "" }, { "docid": "d502d0c14b332f9847902a2b7a087eba", "text": "The wide adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically reduce the number of traffic accidents. Some accidents, though, will be inevitable, because some situations will require AVs to choose the lesser of two evils. For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall. It is a formidable challenge to define the algorithms that will guide AVs confronted with such moral dilemmas. In particular, these moral algorithms will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. We argue to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm. To illustrate our claim, we report three surveys showing that laypersons are relatively comfortable with utilitarian AVs, programmed to minimize the death toll in case of unavoidable harm. We give special attention to whether an AV should save lives by sacrificing its owner, and provide insights into (i) the perceived morality of this self-sacrifice, (ii) the willingness to see this self-sacrifice being legally enforced, (iii) the expectations that AVs will be programmed to self-sacrifice, and (iv) the willingness to buy self-sacrificing AVs.", "title": "" }, { "docid": "fb80c27ab2615373a316605082adadbb", "text": "The use of sparse representations in signal and image processing is gradually increasing in the past several years. Obtaining an overcomplete dictionary from a set of signals allows us to represent them as a sparse linear combination of dictionary atoms. Pursuit algorithms are then used for signal decomposition. A recent work introduced the K-SVD algorithm, which is a novel method for training overcomplete dictionaries that lead to sparse signal representation. In this work we propose a new method for compressing facial images, based on the K-SVD algorithm. We train K-SVD dictionaries for predefined image patches, and compress each new image according to these dictionaries. The encoding is based on sparse coding of each image patch using the relevant trained dictionary, and the decoding is a simple reconstruction of the patches by linear combination of atoms. An essential pre-process stage for this method is an image alignment procedure, where several facial features are detected and geometrically warped into a canonical spatial location. We present this new method, analyze its results and compare it to several competing compression techniques. 2008 Published by Elsevier Inc.", "title": "" }, { "docid": "4ab58e47f1f523ba3f48c37bc918696e", "text": "In this work, we design a neural network for recognizing emotions in speech, using the standard IEMOCAP dataset. Following the latest advances in audio analysis, we use an architecture involving both convolutional layers, for extracting highlevel features from raw spectrograms, and recurrent ones for aggregating long-term dependencies. Applying techniques of data augmentation, layerwise learning rate adjustment and batch normalization, we obtain highly competitive results, with 64.5% weighted accuracy and 61.7% unweighted accuracy on four emotions. 
Moreover, we show that the model performance is strongly correlated with the labeling confidence, which highlights a fundamental difficulty in emotion recognition.", "title": "" }, { "docid": "a36fae7ccd3105b58a4977b5a2366ee8", "text": "As the number of big data management systems continues to grow, users increasingly seek to leverage multiple systems in the context of a single data analysis task. To efficiently support such hybrid analytics, we develop a tool called PipeGen for efficient data transfer between database management systems (DBMSs). PipeGen automatically generates data pipes between DBMSs by leveraging their functionality to transfer data via disk files using common data formats such as CSV. PipeGen creates data pipes by extending such functionality with efficient binary data transfer capabilities that avoid file system materialization, include multiple important format optimizations, and transfer data in parallel when possible. We evaluate our PipeGen prototype by generating 20 data pipes automatically between five different DBMSs. The results show that PipeGen speeds up data transfer by up to 3.8× as compared to transferring using disk files.", "title": "" }, { "docid": "541a5ba448c26aff97b8bfa09bad482e", "text": "By analyzing the shortcomings of various types of cycloid drive, a kind of new cycloid drive that named as output-pin-wheel cycloid drive is innovated. The structure of reducer and the structural dimensions of specific components are designed by analyzing its working principle. In order to validating the correctness of drive model and assembly relations of parts, a three-dimensional solid modeling and virtual prototyping are built. Dynamic simulation analysis of virtual prototyping is done in the no-load and full load conditions, which provides the theory basis for manufacture and optimization of the reducer.", "title": "" }, { "docid": "b4fb3d502f87c2114d6c5b0fc9b6f2aa", "text": "A new power semiconductor device called the Insulated Gate Rectifier (IGR) is described in this paper. This device has the advantages of operating at high current densities while requiring low gate drive power. The devices exhibit relatively slow switching speeds due to bipolar operation. The results of two dimensional computer modelling of the device structure are compared with measurements taken on devices fabricated with 600 volt forward and reverse blocking capability.", "title": "" }, { "docid": "82c4aa6bc189e011556ca7aa6d1688b9", "text": "Two aspects of children’s early gender development the spontaneous production of gender labels and sex-typed play were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children’s gender labeling as based on mothers’ biweekly reports on their children’s language from 9 through 21 months. Videotapes of children’s play both alone and with mother at 17 and 21 months were independently analyzed for play with gender stereotyped and neutral toys. Finally, the relation between gender labeling and sex-typed play was examined. Children transitioned to using gender labels at approximately 19 months on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months. 
Gender labeling predicted increases in sex-typed play, suggesting that knowledge of gender categories might influence sex-typing before the age of 2.", "title": "" } ]
scidocsrr
e290370c64a8c5c4e0365e76e3c13124
Subordinating and coordinating discourse relations
[ { "docid": "be60fa48b6cc666272911b28a5061899", "text": "In this paper, we offer a novel analysis of presuppositions, paying particular attention to the interaction between the knowledge resources that are required to interpret them. The analysis has two main features. First, we capture an analogy between presuppositions, anaphora and scope ambiguity (cf. van der Sandt 1992), by utilizing semantic underspecification (cf. Reyle 1993). Second, resolving this underspecification requires reasoning about how the presupposition is rhetorically connected to the discourse context. This has several consequences. First, since pragmatic information plays a role in computing the rhetorical relation, it also constrains the interpretation of presuppositions. Our account therefore provides a formal framework for analysing problematic data, which require pragmatic reasoning. Second, binding presuppositions to the context via rhetorical links replaces accommodating them, in the sense of adding them to the context (cf. Lewis 1979). The treatment of presupposition is thus generalized and integrated into the discourse update procedure. We formalize this approach in SDRT (Asher 1993; Lascarides & Asher 1993), and demonstrate that it provides a rich framework for interpreting presuppositions, where semantic and pragmatic constraints are integrated. 1 INTRODUCTION The interpretation of a presupposition typically depends on the context in which it is made. Consider, for instance, sentences (1) vs. (2), adapted from van der Sandt (1992); the presupposition triggered by Jack's son (that Jack has a son) is implied by (1), but not by (2). (1) If baldness is hereditary, then Jack's son is bald. (2) If Jack has a son, then Jack's son is bald. The challenge for a formal semantic theory of presuppositions is to capture contextual effects such as these in an adequate manner. In particular, such a theory must account for why the presupposition in (1) projects from an embedded context, while the presupposition in (2) does not. This is a special case of the Projection Problem: if a compound sentence S is made up of constituent sentences S1, ..., Sn, each with presuppositions P1, ..., Pn, then what are the presuppositions of S? Many recent accounts of presupposition that offer solutions to the Projection Problem have exploited the dynamics in dynamic semantics (e.g. Beaver 1996; Geurts 1996; Heim 1982; van der Sandt 1992). In these frameworks, assertional meaning is a relation between an input context (or information state) and an output context. Presuppositions impose tests on the input context, which researchers have analysed in two ways: either the context must satisfy the presuppositions of the clause being interpreted (e.g. Beaver 1996; Heim 1982) or the presuppositions are anaphoric (e.g. van der Sandt 1992) and so must be bound to elements in the context. But clauses carrying presuppositions can be felicitous even when the context fails these tests (e.g. (1)). A special purpose procedure known as accommodation is used to account for this (cf. Lewis 1979): if the context fails the presupposition test, then the presupposition is accommodated or added to it, provided various constraints are met (e.g. the result must be satisfiable). This combination of test and accommodation determines the projection of a presupposition. 
For example, in (1), the antecedent produces a context which fails the test imposed by the presupposition in the consequent (either satisfaction or binding). So it is accommodated. Since it can be added to the context outside the scope of the conditional, it can project out from its embedding. In contrast, the antecedent in (2) ensures that the input context passes the presupposition test. So the presupposition is not accommodated, the input context is not changed, and the presupposition is not projected out from the conditional. Despite these successes, this approach has trouble with some simple predictions. Compare the following two dialogues (3abc) and (3abd): (3) a. A: Did you hear about John? b. B: No, what? c. A: He had an accident. A car hit him. d. A: He had an accident. ??The car hit him. The classic approach we just outlined would predict no difference between these two discourses and would find them both acceptable. But (3abd) is unacceptable. As it stands it lacks discourse coherence, while (3abc) does not; the presupposition of the car cannot be accommodated in (3abd). We will argue that the proper treatment of presuppositions in discourse, like a proper treatment of assertions, requires a notion of discourse coherence and must take into account the rhetorical function of both presupposed and asserted information. We will provide a formal account of presuppositions, which integrates constraints from compositional semantics and pragmatics in the required manner. We will start by examining van der Sandt's theory of presupposition satisfaction, since he offers the most detailed proposal concerning accommodation. We will highlight some difficulties, and offer a new proposal which attempts to overcome them. We will adopt van der Sandt's view that presuppositions are anaphoric, but give it some new twists. First, like other anaphoric expressions (e.g. anaphoric pronouns), presuppositions have an underspecified semantic content. Interpreting them in context involves resolving the underspecification. The second distinctive feature is the way we resolve underspecification. We assume a formal model of discourse semantics known as SDRT (e.g. Asher 1993; Lascarides & Asher 1993), where semantic underspecification in a proposition is resolved by reasoning about the way that proposition rhetorically connects to the discourse context. Thus, interpreting presuppositions becomes a part of discourse update in SDRT. This has three important consequences. The first concerns pragmatics. SDRT provides an explicit formal account of how semantic and pragmatic information interact when computing a rhetorical link between a proposition and its discourse context. This interaction will define the interpretation of presuppositions, and thus provide a richer source of constraints on presuppositions than standard accounts. This account of presuppositions will exploit pragmatic information over and above the clausal implicatures of the kind used in Gazdar's (1979) theory of presuppositions. We'll argue in section 2 that going beyond these implicatures is necessary to account for some of the data. The second consequence of interpreting presuppositions in SDRT concerns accommodation. 
In all previous dynamic theories of presupposition, accommodation amounts to adding, but not relating, the presupposed content to some accessible part of the context. This mechanism is peculiar to presuppositions; it does not feature in accounts of any other phenomena, including other anaphoric phenomena. In contrast, we model presuppositions entirely in terms of the SDRT discourse update procedure. We replace the notion that presuppositions are added to the discourse context with the notion that they are rhetorically linked to it. Given that the theory of rhetorical structure in SDRT is used to model a wide range of linguistic phenomena when applied to assertions, it would be odd if presupposed information were to be entirely insensitive to rhetorical function. We will show that presupposed information is sensitive to rhetorical function and that the notion of accommodation should be replaced with a more constrained notion of discourse update. The third consequence concerns the compositional treatment of presupposition. Our approach affords what one could call a compositional treatment of presuppositions. The discourse semantics of SDRT is compositional upon discourse structure: the meaning of a discourse is a function of the meaning of its parts and how they are related to each other. In SDRT presuppositions, like assertions, generate underspecified but interpretable logical forms. The procedure for constructing the semantic representation of discourse takes these underspecified logical forms, resolves some of the underspecifications and relates them together by means of discourse relations representing their rhetorical function in the discourse. So presuppositions have a content that contributes to the content of the discourse as a whole. Indeed, presuppositions have no less a compositional treatment than assertions. Our discourse-based approach affords a wider perspective on presuppositions. Present dynamic accounts of presupposition have concentrated on phenomena like the Projection Problem. For us the Projection Problem amounts to an important special case, which applies to single sentence discourses, of the more general 'discourse' problem: how do presuppositions triggered by elements of a multi-sentence discourse affect its structure and content? We aim to tackle this question here. And we claim that a rich notion of discourse structure, which utilizes rhetorical relations, is needed. While we believe that our discourse-based theory of presupposition is novel, we hasten to add that many authors on presupposition like Beaver (1996) and van der Sandt (1992) would agree with us that the treatment of presupposition must be integrated with a richer notion of discourse structure and discourse update than is available in standard dynamic semantics (e.g. Kamp & Reyle's DRT, Dynamic Predicate Logic or Update Semantics), because they believe that pragmatic information constrains the interpretation of presuppositions. We wish to extend their theories with this requisite notion of discourse structure. 2 VAN DER SANDT'S DYNAMIC ACCOUNT AND ITS PROBLEMS Van der Sandt (1992) views presuppositions as anaphors with semantic content. He develops this view within the framework of DRT (Kamp & Reyle 1993), in order to exploit its constraints on anaphoric antecedents. A presupposition can bind t", "title": "" } ]
[ { "docid": "6992762ad22f9e33db6ded9430e06848", "text": "Solution M and C are strictly dominated and hence cannot receive positive probability in any Nash equilibrium. Given that only L and R receive positive probability, T cannot receive positive probability either. So, in any Nash equilibrium player 1 must play B with probability one. Given that, any probability distribution over L and R is a best response for player 2. In other words, the set of Nash equilibria is given by", "title": "" }, { "docid": "bd1b178ad5eabe9d40319ebada94146b", "text": "The emergence and abundance of cooperation in nature poses a tenacious and challenging puzzle to evolutionary biology. Cooperative behaviour seems to contradict Darwinian evolution because altruistic individuals increase the fitness of other members of the population at a cost to themselves. Thus, in the absence of supporting mechanisms, cooperation should decrease and vanish, as predicted by classical models for cooperation in evolutionary game theory, such as the Prisoner's Dilemma and public goods games. Traditional approaches to studying the problem of cooperation assume constant population sizes and thus neglect the ecology of the interacting individuals. Here, we incorporate ecological dynamics into evolutionary games and reveal a new mechanism for maintaining cooperation. In public goods games, cooperation can gain a foothold if the population density depends on the average population payoff. Decreasing population densities, due to defection leading to small payoffs, results in smaller interaction group sizes in which cooperation can be favoured. This feedback between ecological dynamics and game dynamics can generate stable coexistence of cooperators and defectors in public goods games. However, this mechanism fails for pairwise Prisoner's Dilemma interactions and the population is driven to extinction. Our model represents natural extension of replicator dynamics to populations of varying densities.", "title": "" }, { "docid": "66ab42e668afaf95c39b378518e60198", "text": "OBJECTIVE\nTo present a guideline-derived mnemonic that provides a systematic monitoring process to increase pharmacists' confidence in total parenteral nutrition (TPN) monitoring and improve safety and efficacy of TPN use.\n\n\nDATA SOURCES\nThe American Society for Parenteral and Enteral Nutrition (ASPEN) guidelines were reviewed. Additional resources included a literature search of PubMed (1980 to May 2016) using the search terms: total parenteral nutrition, mnemonic, indications, allergy, macronutrients, micronutrients, fluid, comorbidities, labs, peripheral line, and central line. Articles (English-language only) were evaluated for content, and additional references were identified from a review of literature citations.\n\n\nSTUDY SELECTION AND DATA EXTRACTION\nAll English-language observational studies, review articles, meta-analyses, guidelines, and randomized trials assessing monitoring parameters of TPN were evaluated.\n\n\nDATA SYNTHESIS\nThe ASPEN guidelines were referenced to develop key components of the mnemonic. Review articles, observational trials, meta-analyses, and randomized trials were reviewed in cases where guidelines did not adequately address these components.\n\n\nCONCLUSIONS\nA guideline-derived mnemonic was developed to systematically and safely manage TPN therapy. 
The mnemonic combines 7 essential components of TPN use and monitoring: Indications, Allergies, Macro/Micro nutrients, Fluid, Underlying comorbidities, Labs, and Line type.", "title": "" }, { "docid": "7de923c310b38193b2d4d3bd9e7096bb", "text": "To date, most research into massively multiplayer online role-playing games (MMORPGs) has examined the demographics of play. This study explored the social interactions that occur both within and outside of MMORPGs. The sample consisted of 912 self-selected MMORPG players from 45 countries. MMORPGs were found to be highly socially interactive environments providing the opportunity to create strong friendships and emotional relationships. The study demonstrated that the social interactions in online gaming form a considerable element in the enjoyment of playing. The study showed MMORPGs can be extremely social games, with high percentages of gamers making life-long friends and partners. It was concluded that virtual gaming may allow players to express themselves in ways they may not feel comfortable doing in real life because of their appearance, gender, sexuality, and/or age. MMORPGs also offer a place where teamwork, encouragement, and fun can be experienced.", "title": "" }, { "docid": "291628b7e68f897bf23ca1ad1c0fdcfd", "text": "Device-free Passive (DfP) human detection acts as a key enabler for emerging location-based services such as smart space, human-computer interaction, and asset security. A primary concern in devising scenario-tailored detecting systems is coverage of their monitoring units. While disk-like coverage facilitates topology control, simplifies deployment analysis, and is crucial for proximity-based applications, conventional monitoring units demonstrate directional coverage due to the underlying transmitter-receiver link architecture. To achieve omnidirectional coverage under such link-centric architecture, we propose the concept of omnidirectional passive human detection. The rationale is to exploit the rich multipath effect to blur the directional coverage. We harness PHY layer features to robustly capture the fine-grained multipath characteristics and virtually tune the shape of the coverage of the monitoring unit, which is previously prohibited with mere MAC layer RSSI. We design a fingerprinting scheme and a threshold-based scheme with off-the-shelf WiFi infrastructure and evaluate both schemes in typical clustered indoor scenarios. Experimental results demonstrate an average false positive of 8 percent and an average false negative of 7 percent for fingerprinting in detecting human presence in 4 directions. 
And both average false positive and false negative remain around 10 percent even with threshold-based methods.", "title": "" }, { "docid": "0d5ba680571a9051e70ababf0c685546", "text": "• Current deep RL techniques require large amounts of data to find a good policy • Once found, the policy remains a black box to practitioners • Practitioners cannot verify that the policy is making decisions based on reasonable information • MOREL (Motion-Oriented REinforcement Learning) automatically detects moving objects and uses the relevant information for action selection • We gather a dataset using a uniform random policy • Train a network without supervision to capture a structured representation of motion between frames • Network predicts object masks, object motion, and camera motion to warp one frame into the next Introduction Learning to Segment Moving Objects Experiments Visualization", "title": "" }, { "docid": "756b25456494b3ece9b240ba3957f91c", "text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.", "title": "" }, { "docid": "b443608eadf6fe51d01ef1e443e9a371", "text": "Nowadays Web search engines are experiencing significant performance challenges caused by a huge amount of Web pages and increasingly larger number of Web users. The key issue for addressing these challenges is to design a compact structure which can index Web documents with low space and meanwhile process keyword search very fast. Unfortunately, the current solutions typically separate the space optimization from the search improvement. As a result, such solutions either save space yet with search inefficiency, or allow fast keyword search but with huge space requirement. In this paper, to address the challenges, we propose a novel structure bitlist with both low space requirement and supporting fast keyword search. Specifically, based on a simple and yet very efficient encoding scheme, bitlist uses a single number to encode a set of integer document IDs for low space, and adopts fast bitwise operations for very efficient boolean-based keyword search. Our extensive experimental results on real and synthetic data sets verify that bitlist outperforms the recent proposed solution, inverted list compression [23, 22] by spending 36.71% less space and 61.91% faster processing time, and achieves comparable running time as [8] but with significantly lower space.", "title": "" }, { "docid": "4cc4c8fd07f30b5546be2376c1767c19", "text": "We apply new bilevel and trilevel optimization models to make critical infrastructure more resilient against terrorist attacks. Each model features an intelligent attacker (terrorists) and a defender (us), information transparency, and sequential actions by attacker and defender. 
We illustrate with examples of the US Strategic Petroleum Reserve, the US Border Patrol at Yuma, Arizona, and an electrical transmission system. We conclude by reporting insights gained from the modeling experience and many “red-team” exercises. Each exercise gathers open-source data on a real-world infrastructure system, develops an appropriate bilevel or trilevel model, and uses these to identify vulnerabilities in the system or to plan an optimal defense.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "be8864d6fb098c8a008bfeea02d4921a", "text": "Active testing has recently been introduced to effectively test concurrent programs. Active testing works in two phases. It first uses predictive off-the-shelf static or dynamic program analyses to identify potential concurrency bugs, such as data races, deadlocks, and atomicity violations. In the second phase, active testing uses the reports from these predictive analyses to explicitly control the underlying scheduler of the concurrent program to accurately and quickly discover real concurrency bugs, if any, with very high probability and little overhead. In this paper, we present an extensible framework for active testing of Java programs. The framework currently implements three active testers based on data races, atomic blocks, and deadlocks.", "title": "" }, { "docid": "91a019627b2d443bc244c57e445c59b3", "text": "The ego depletion effect is one of the most famous phenomena in social psychology. A recent meta-analysis showed that after accounting for small-studies effects by using a newly developed method called PET-PEESE, the ego depletion effect was indistinguishable from zero. However, it is too early to draw such rushing conclusion because of the inappropriate usage of PET-PEESE. The current paper reported a stricter and updated meta-analysis of ego depletion by carefully inspecting problems in the previous meta-analysis, including new studies not covered by it, and testing the effectiveness of each depleting task. The results suggest that attention video should be an ineffective depleting task, whereas emotion video should be the most effective one. Future studies are needed to confirm the effectiveness of each depletion task revealed by the current meta-analysis.", "title": "" }, { "docid": "4d331769ca3f02e9ec96e172d98f3fab", "text": "This review focuses on the most recent applications of zinc oxide (ZnO) nanostructures for tissue engineering. ZnO is one of the most investigated metal oxides, thanks to its multifunctional properties coupled with the ease of preparing various morphologies, such as nanowires, nanorods, and nanoparticles. Most ZnO applications are based on its semiconducting, catalytic and piezoelectric properties. However, several works have highlighted that ZnO nanostructures may successfully promote the growth, proliferation and differentiation of several cell lines, in combination with the rise of promising antibacterial activities. In particular, osteogenesis and angiogenesis have been effectively demonstrated in numerous cases. 
Such peculiarities have been observed both for pure nanostructured ZnO scaffolds as well as for three-dimensional ZnO-based hybrid composite scaffolds, fabricated by additive manufacturing technologies. Therefore, all these findings suggest that ZnO nanostructures represent a powerful tool in promoting the acceleration of diverse biological processes, finally leading to the formation of new living tissue useful for organ repair.", "title": "" }, { "docid": "d1a24f8662191f34b7c8ad971610b671", "text": "We report a case of subcutaneous infection in a 67 year-old Cambodian man who presented with a 5-month history of swelling of the right foot. Histopathology was compatible with phaeohyphomycosis and the hyphomycete Phialemoniopsis ocularis was identified by the means of morphological and molecular techniques. The patient responded well to a 6-month oral treatment with voriconazole alone.", "title": "" }, { "docid": "5ff7a82ec704c8fb5c1aa975aec0507c", "text": "With the increase of an ageing population and chronic diseases, society becomes more health conscious and patients become “health consumers” looking for better health management. People’s perception is shifting towards patient-centered, rather than the classical, hospital–centered health services which has been propelling the evolution of telemedicine research from the classic e-Health to m-Health and now is to ubiquitous healthcare (u-Health). It is expected that mobile & ubiquitous Telemedicine, integrated with Wireless Body Area Network (WBAN), have a great potential in fostering the provision of next-generation u-Health. Despite the recent efforts and achievements, current u-Health proposed solutions still suffer from shortcomings hampering their adoption today. This paper presents a comprehensive review of up-to-date requirements in hardware, communication, and computing for next-generation u-Health systems. It compares new technological and technical trends and discusses how they address expected u-Health requirements. A thorough survey on various worldwide recent system implementations is presented in an attempt to identify shortcomings in state-of-the art solutions. In particular, challenges in WBAN and ubiquitous computing were emphasized. The purpose of this survey is not only to help beginners with a holistic approach toward understanding u-Health systems but also present to researchers new technological trends and design challenges they have to cope with, while designing such systems.", "title": "" }, { "docid": "6020b70701164e0a14b435153db1743e", "text": "Supply chain Management has assumed a significant role in firm's performance and has attracted serious research attention over the last few years. In this paper attempt has been made to review the literature on Supply Chain Management. A literature review reveals a considerable spurt in research in theory and practice of SCM. We have presented a literature review for 29 research papers for the period between 2005 and 2011. The aim of this study was to provide an up-to-date and brief review of the SCM literature that was focused on broad areas of the SCM concept.", "title": "" }, { "docid": "c2ab069a9f3efaf212cbfb4a38ffdb8e", "text": "Clustering is a useful technique that organizes a large quantity of unordered text documents into a small number of meaningful and coherent clusters, thereby providing a basis for intuitive and informative navigation and browsing mechanisms. 
Partitional clustering algorithms have been recognized to be more suitable as opposed to the hierarchical clustering schemes for processing large datasets. A wide variety of distance functions and similarity measures have been used for clustering, such as squared Euclidean distance, cosine similarity, and relative entropy. In this paper, we compare and analyze the effectiveness of these measures in partitional clustering for text document datasets. Our experiments utilize the standard Kmeans algorithm and we report results on seven text document datasets and five distance/similarity measures that have been most commonly used in text clustering.", "title": "" }, { "docid": "ee06da579046d0ad1a83aa90784d8b0c", "text": "Compassion is a positive orientation towards suffering that may be enhanced through compassion training and is thought to influence psychological functioning. However, the effects of compassion training on mindfulness, affect, and emotion regulation are not known. We conducted a randomized controlled trial in which 100 adults from the community were randomly assigned to either a 9-week compassion cultivation training (CCT) or a waitlist (WL) control condition. Participants completed self-report inventories that measured mindfulness, positive and negative affect, and emotion regulation. Compared to WL, CCT resulted in increased mindfulness and happiness, as well as decreased worry and emotional suppression. Within CCT, the amount of formal meditation practiced was related to reductions in worry and emotional suppression. These findings suggest that compassion cultivation training effects cognitive and emotion factors that support psychological flexible and adaptive functioning.", "title": "" }, { "docid": "72066209028301418cc634270ad7f9a9", "text": "BACKGROUND\nTo date, head-to-head trials comparing the efficacy and safety of biological disease-modifying antirheumatic drugs within the same class, including TNF inhibitors, in patients with active rheumatoid arthritis despite methotrexate therapy are lacking. We aimed to compare the efficacy and safety of two different TNF inhibitors and to assess the efficacy and safety of switching to the other TNF inhibitor without a washout period after insufficient primary response to the first TNF inhibitor at week 12.\n\n\nMETHODS\nIn this 104-week, randomised, single-blind (double-blind until week 12 and investigator blind thereafter), parallel-group, head-to-head superiority study (EXXELERATE), eligible patients from 151 centres worldwide were aged 18 years or older with a diagnosis of rheumatoid arthritis at screening, as defined by the 2010 ACR/EULAR criteria, and had prognostic factors for severe disease progression, including a positive rheumatoid factor, or anti-cyclic citrullinated peptide antibody result, or both. Participants were randomly assigned (1:1) via an interactive voice and web response system with no stratification to receive certolizumab pegol plus methotrexate or adalimumab plus methotrexate. All study staff were kept masked throughout the study and participants were masked until week 12. At week 12, patients were classified as responders (by either achieving low disease activity [LDA] according to Disease Activity Score 28-erythrocyte sedimentation rate [DAS28-ESR] ≤3·2 or DAS28-ESR reduction ≥1·2 from baseline) or as non-responders. Non-responders to the first TNF inhibitor to which they were randomised were switched to the other TNF inhibitor with no washout period. 
Primary endpoints were the percentage of patients achieving a 20% improvement according to the American College of Rheumatology criteria (ACR20) at week 12 and LDA at week 104 (week 12 non-responders were considered LDA non-responders). This study is registered with ClinicalTrials.gov, number NCT01500278.\n\n\nFINDINGS\nBetween Dec 14, 2011, and Nov 11, 2013, 1488 patients were screened of whom 915 were randomly assigned; 457 to certolizumab pegol plus methotrexate and 458 to adalimumab plus methotrexate. No statistically significant difference was observed in ACR20 response at week 12 (314 [69%] of 454 patients and 324 [71%] of 454 patients; odds ratio [OR] 0·90 [95% CI 0·67-1·20]; p=0·467) or DAS28-ESR LDA at week 104 (161 [35%] of 454 patients and 152 [33%] of 454 patients; OR 1·09 [0·82-1·45]; p=0·532) between certolizumab pegol plus methotrexate and adalimumab plus methotrexate, respectively. At week 12, 65 non-responders to certolizumab pegol were switched to adalimumab and 57 non-responders to adalimumab were switched to certolizumab pegol; 33 (58%) of 57 patients switching to certolizumab pegol and 40 (62%) of 65 patients switching to adalimumab responded 12 weeks later by achieving LDA or a DAS28-ESR reduction 1·2 or greater. 389 [75%] of 516 patients who received certolizumab pegol plus methotrexate and 386 [74%] of 523 patients who received adalimumab plus methotrexate reported treatment-emergent adverse events. Three deaths (1%) occurred in each group. No serious infection events were reported in the 70-day period after treatment switch.\n\n\nINTERPRETATION\nThese results show that certolizumab pegol plus methotrexate is not superior to adalimumab plus methotrexate. The data also show the clinical benefit and safety of switching to a second TNF inhibitor without a washout period after primary failure to a first TNF inhibitor.\n\n\nFUNDING\nUCB Pharma.", "title": "" } ]
scidocsrr
e1ee3768df5a989e7aaf61ed66ca7c4d
Learning to Skim Text
[ { "docid": "2f20bca0134eb1bd9d65c4791f94ddcc", "text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "title": "" }, { "docid": "1fe8f55e2d402c5fe03176cbf83a16c3", "text": "This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying sequences of binary logic operations, adding sequences of integers, and sorting sequences of real numbers. Overall performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. When applied to character-level language modelling on the Hutter prize Wikipedia dataset, ACT yields intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could be used to infer segment boundaries in sequence data.", "title": "" } ]
[ { "docid": "0d13b8a8f7a4584bc7c1402137e79a2c", "text": "Different methods are proposed to learn phrase embedding, which can be mainly divided into two strands. The first strand is based on the distributional hypothesis to treat a phrase as one non-divisible unit and to learn phrase embedding based on its external context similar to learn word embedding. However, distributional methods cannot make use of the information embedded in component words and they also face data spareness problem. The second strand is based on the principle of compositionality to infer phrase embedding based on the embedding of its component words. Compositional methods would give erroneous result if a phrase is non-compositional. In this paper, we propose a hybrid method by a linear combination of the distributional component and the compositional component with an individualized phrase compositionality constraint. The phrase compositionality is automatically computed based on the distributional embedding of the phrase and its component words. Evaluation on five phrase level semantic tasks and experiments show that our proposed method has overall best performance. Most importantly, our method is more robust as it is less sensitive to datasets.", "title": "" }, { "docid": "8af3b1f6b06ff91dee4473bfb50c420d", "text": "Crowdsensing technologies are rapidly evolving and are expected to be utilized on commercial applications such as location-based services. Crowdsensing collects sensory data from daily activities of users without burdening users, and the data size is expected to grow into a population scale. However, quality of service is difficult to ensure for commercial use. Incentive design in crowdsensing with monetary rewards or gamifications is, therefore, attracting attention for motivating participants to collect data to increase data quantity. In contrast, we propose Steered Crowdsensing, which controls the incentives of users by using the game elements on location-based services for directly improving the quality of service rather than data size. For a feasibility study of steered crowdsensing, we deployed a crowdsensing system focusing on application scenarios of building processes on wireless indoor localization systems. In the results, steered crowdsensing realized deployments faster than non-steered crowdsensing while having half as many data.", "title": "" }, { "docid": "20ebefc5be0e91e15e4773c633624224", "text": "Effects of different levels of Biomin® IMBO synbiotic, including Enterococcus faecium (as probiotic), and fructooligosaccharides (as prebiotic) on survival, growth performance, and digestive enzyme activities of common carp fingerlings (Cyprinus carpio) were evaluated. The experiment was carried out in four treatments (each with 3 replicates), including T1 = control with non-synbiotic diet, T2 = 0.5 g/kg synbiotic diet, T3 = 1 g/kg synbiotic diet, and T4 = 1.5 g/kg synbiotic diet. In total 300 fish with an average weight of 10 ± 1 g were distributed in 12 tanks (25 animals per 300 l) and were fed experimental diets over a period of 60 days. The results showed that synbiotic could significantly enhance growth parameters (weight gain, length gain, specific growth rate, percentage weight gain) (P < 0.05), but did not exhibit any effect on survival rate (P > 0.05) compared with the control. 
An assay of the digestive enzyme activities demonstrated that the trypsin and chymotrypsin activities of synbiotic groups were considerably increased than those in the control (P < 0.05), but there was no significant difference in the levels of α-amylase, lipase, or alkaline phosphatase (P > 0.05). This study indicated that different levels of synbiotic have the capability to enhance probiotic substitution, to improve digestive enzyme activity which leads to digestive system efficiency, and finally to increase growth. It seems that the studied synbiotic could serve as a good diet supplement for common carp cultures.", "title": "" }, { "docid": "a02cd3bccf9c318f0c7a01fa84bc0f8e", "text": "In the last several years, differential privacy has become the leading framework for private data analysis. It provides bounds on the amount that a randomized function can change as the result of a modification to one record of a database. This requirement can be satisfied by using the exponential mechanism to perform a weighted choice among the possible alternatives, with better options receiving higher weights. However, in some situations the number of possible outcomes is too large to compute all weights efficiently. We present the subsampled exponential mechanism, which scores only a sample of the outcomes. We show that it still preserves differential privacy, and fulfills a similar accuracy bound. Using a clustering application, we show that the subsampled exponential mechanism outperforms a previously published private algorithm and is comparable to the full exponential mechanism but more scalable.", "title": "" }, { "docid": "a1486f866b7db99328b40be2d6e1ba41", "text": "Graphology or Handwriting analysis is a scientific method of identifying, evaluating and understanding of anyone personality through the strokes and pattern revealed by handwriting. Handwriting reveals the true personality including emotional outlay, honesty, fears and defenses and etc. Handwriting stroke reflects the written trace of each individual's rhythm and Style. The image split into two areas: the signature based on three features and application form of letters digit area. In this research performance evaluation is done by calculating mean square error using Back Propagation Neural Network (BPNN).Human behaviour is analyzed on the basis of signature by using neural", "title": "" }, { "docid": "4f6b8ea6fb0884bbcf6d4a6a4f658e52", "text": "Ballistocardiography (BCG) enables the recording of heartbeat, respiration, and body movement data from an unconscious human subject. In this paper, we propose a new heartbeat detection algorithm for calculating heart rate (HR) and heart rate variability (HRV) from the BCG signal. The proposed algorithm consists of a moving dispersion calculation method to effectively highlight the respective heartbeat locations and an adaptive heartbeat peak detection method that can set a heartbeat detection window by automatically predicting the next heartbeat location. To evaluate the proposed algorithm, we compared it with other reference algorithms using a filter, waveform analysis and envelope calculation of signal by setting the ECG lead I as the gold standard. The heartbeat detection in BCG should be able to measure sensitively in the regions for lower and higher HR. However, previous detection algorithms are optimized mainly in the region of HR range (60~90 bpm) without considering the HR range of lower (40~60 bpm) and higher (90~110 bpm) HR. 
Therefore, we proposed an improved method for the wide HR range of 40~110 bpm. The proposed algorithm detected the heartbeat with greater stability over varying and wider heartbeat intervals compared with previous algorithms. Our proposed algorithm achieved a relative accuracy of 98.29% with a root mean square error (RMSE) of 1.83 bpm for HR, as well as coverage of 97.63% and relative accuracy of 94.36% for HRV. We also obtained a root mean square (RMS) value of 1.67 for the separated HR ranges.", "title": "" }, { "docid": "87a256b5e67b97cf4a11b5664a150295", "text": "This paper presents a method for speech emotion recognition using spectrograms and deep convolutional neural network (CNN). Spectrograms generated from the speech signals are input to the deep CNN. The proposed model consisting of three convolutional layers and three fully connected layers extract discriminative features from spectrogram images and outputs predictions for the seven emotions. In this study, we trained the proposed model on spectrograms obtained from Berlin emotions dataset. Furthermore, we also investigated the effectiveness of transfer learning for emotions recognition using a pre-trained AlexNet model. Preliminary results indicate that the proposed approach based on freshly trained model is better than the fine-tuned model, and is capable of predicting emotions accurately and efficiently.", "title": "" }, { "docid": "2876086e4431e8607d5146f14f0c29dc", "text": "Vascular ultrasonography has an important role in the diagnosis and management of venous disease. The venous system, however, is more complex and variable compared to the arterial system due to its frequent anatomical variations. This often becomes quite challenging for sonographers. This paper discusses the anatomy of the long saphenous vein and its anatomical variations accompanied by sonograms and illustrations.", "title": "" }, { "docid": "ffee60d5f6d862115b7d7d2442e1a1b9", "text": "Preventing accidents caused by drowsiness has become a major focus of active safety driving in recent years. It requires an optimal technique to continuously detect drivers' cognitive state related to abilities in perception, recognition, and vehicle control in (near-) real-time. The major challenges in developing such a system include: 1) the lack of significant index for detecting drowsiness and 2) complicated and pervasive noise interferences in a realistic and dynamic driving environment. In this paper, we develop a drowsiness-estimation system based on electroencephalogram (EEG) by combining independent component analysis (ICA), power-spectrum analysis, correlation evaluations, and linear regression model to estimate a driver's cognitive state when he/she drives a car in a virtual reality (VR)-based dynamic simulator. The driving error is defined as deviations between the center of the vehicle and the center of the cruising lane in the lane-keeping driving task. Experimental results demonstrate the feasibility of quantitatively estimating drowsiness level using ICA-based multistream EEG spectra. The proposed ICA-based method applied to power spectrum of ICA components can successfully (1) remove most of EEG artifacts, (2) suggest an optimal montage to place EEG electrodes, and estimate the driver's drowsiness fluctuation indexed by the driving performance measure. 
Finally, we present a benchmark study in which the accuracy of ICA-component-based alertness estimates compares favorably to scalp-EEG based.", "title": "" }, { "docid": "d4da4c9bc129a15a8f7b7094216bc4b2", "text": "This paper presents a physical description of two specific aspects in drain-extended MOS transistors, i.e., quasi-saturation and impact-ionization effects. The 2-D device simulator Medici provides the physical insights, and both the unique features are originally attributed to the Kirk effect. The transistor dc model is derived from regional analysis of carrier transport in the intrinsic MOS and the drift region. The substrate-current equations, considering extra impact-ionization factors in the drift region, are also rigorously derived. The proposed model is primarily validated by MATLAB program and exhibits excellent scalability for various transistor dimensions, drift-region doping concentration, and voltage-handling capability.", "title": "" }, { "docid": "9f066ec1613ebea914e635c3505a2728", "text": "Class imbalance is often a problem in various real-world data sets, where one class (i.e. the minority class) contains a small number of data points and the other (i.e. the majority class) contains a large number of data points. It is notably difficult to develop an effective model using current data mining and machine learning algorithms without considering data preprocessing to balance the imbalanced data sets. Random undersampling and oversampling have been used in numerous studies to ensure that the different classes contain the same number of data points. A classifier ensemble (i.e. a structure containing several classifiers) can be trained on several different balanced data sets for later classification purposes. In this paper, we introduce two undersampling strategies in which a clustering technique is used during the data preprocessing step. Specifically, the number of clusters in the majority class is set to be equal to the number of data points in the minority class. The first strategy uses the cluster centers to represent the majority class, whereas the second strategy uses the nearest neighbors of the cluster centers. A further study was conducted to examine the effect on performance of the addition or deletion of 5 to 10 cluster centers in the majority class. The experimental results obtained using 44 small-scale and 2 large-scale data sets revealed that the clustering-based undersampling approach with the second strategy outperformed five state-of-the-art approaches. Specifically, this approach combined with a single multilayer perceptron classifier and C4.5 decision tree classifier ensembles delivered optimal performance over both smalland large-scale data sets. © 2017 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "f31fa4bfc30cc4f0eff4399d16a077dd", "text": "BACKGROUND:Immunohistochemistry allowed recent recognition of a distinct focal gastritis in Crohn's disease. Following reports of lymphocytic colitis and small bowel enteropathy in children with regressive autism, we aimed to see whether similar changes were seen in the stomach. We thus studied gastric antral biopsies in 25 affected children, in comparison to 10 with Crohn's disease, 10 with Helicobacter pylori infection, and 10 histologically normal controls. All autistic, Crohn's, and normal patients were H. pylori negative.METHODS:Snap-frozen antral biopsies were stained for CD3, CD4, CD8, γδ T cells, HLA-DR, IgG, heparan sulphate proteoglycan, IgM, IgA, and C1q. 
Cell proliferation was assessed with Ki67.RESULTS:Distinct patterns of gastritis were seen in the disease states: diffuse, predominantly CD4+ infiltration in H. pylori, and focal-enhanced gastritis in Crohn's disease and autism, the latter distinguished by striking dominance of CD8+ cells, together with increased intraepithelial lymphocytes in surface, foveolar and glandular epithelium. Proliferation of foveolar epithelium was similarly increased in autism, Crohn's disease and H. pylori compared to controls. A striking finding, seen only in 20/25 autistic children, was colocalized deposition of IgG and C1q on the subepithelial basement membrane and the surface epithelium.CONCLUSIONS:These findings demonstrate a focal CD8-dominated gastritis in autistic children, with novel features. The lesion is distinct from the recently recognized focal gastritis of Crohn's disease, which is not CD8-dominated. As in the small intestine, there is epithelial deposition of IgG.", "title": "" }, { "docid": "1708974f940677a9242d23d12e02046d", "text": "Previous algorithms for supervised sequence learning are based on dynamic recurrent networks. This paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights: The first net learns to produce context-dependent weight changes for the second net whose weights may vary very quickly. The method offers the potential for STM storage efficiency: A single weight (instead of a full-fledged unit) may be sufficient for storing temporal information. Various learning methods are derived. Two experiments with unknown time delays illustrate the approach. One experiment shows how the system can be used for adaptive temporary variable binding.", "title": "" }, { "docid": "df679dcd213842a786c1ad9587c66f77", "text": "The statistics of professional sports, including players and teams, provide numerous opportunities for research. Cricket is one of the most popular team sports, with billions of fans all over the world. In this thesis, we address two problems related to the One Day International (ODI) format of the game. First, we propose a novel method to predict the winner of ODI cricket matches using a team-composition based approach at the start of the match. Second, we present a method to quantitatively assess the performances of individual players in a match of ODI cricket which incorporates the game situations under which the players performed. The player performances are further used to predict the player of the match award. Players are the fundamental unit of a team. Players of one team work against the players of the opponent team in order to win a match. The strengths and abilities of the players of a team play a key role in deciding the outcome of a match. However, a team changes its composition depending on the match conditions, venue, and opponent team, etc. Therefore, we propose a novel dynamic approach which takes into account the varying strengths of the individual players and reflects the changes in player combinations over time. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual players’ batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. 
Using the relative strength of one team versus the other, along with two player-independent features, namely, the toss outcome and the venue of the match, we evaluate multiple supervised machine learning algorithms to predict the winner of the match. We show that, for our approach, the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers. Players have multiple roles in a game of cricket, predominantly as batsmen and bowlers. Over the generations, statistics such as batting and bowling averages, and strike and economy rates have been used to judge the performance of individual players. These measures, however, do not take into consideration the context of the game in which a player performed across the course of a match. Further, these types of statistics are incapable of comparing the performance of players across different roles. Therefore, we present an approach to quantitatively assess the performances of individual players in a single match of ODI cricket. We have developed a new measure, called the Work Index, which represents the amount of work that is yet to be done by a team to achieve its target. Our approach incorporates game situations and the team strengths to measure the player contributions. This not only helps us in", "title": "" }, { "docid": "90fdac33a73d1615db1af0c94016da5b", "text": "AIM OF THE STUDY\nThe purpose of this study was to define antidiabetic effects of fruit of Vaccinium arctostaphylos L. (Ericaceae) which is traditionally used in Iran for improving of health status of diabetic patients.\n\n\nMATERIALS AND METHODS\nFirstly, we examined the effect of ethanolic extract of Vaccinium arctostaphylos fruit on postprandial blood glucose (PBG) after 1, 3, 5, 8, and 24h following a single dose administration of the extract to alloxan-diabetic male Wistar rats. Also oral glucose tolerance test was carried out. Secondly, PBG was measured at the end of 1, 2 and 3 weeks following 3 weeks daily administration of the extract. At the end of treatment period the pancreatic INS and cardiac GLUT-4 mRNA expression and also the changes in the plasma lipid profiles and antioxidant enzymes activities were assessed. Finally, we examined the inhibitory activity of the extract against rat intestinal α-glucosidase.\n\n\nRESULTS\nThe obtained results showed mild acute (18%) and also significant chronic (35%) decrease in the PBG, significant reduction in triglyceride (47%) and notable rising of the erythrocyte superoxide dismutase (57%), glutathione peroxidase (35%) and catalase (19%) activities due to treatment with the extract. Also we observed increased expression of GLUT-4 and INS genes in plant extract treated Wistar rats. Furthermore, in vitro studies displayed 47% and 56% inhibitory effects of the extract on activity of intestinal maltase and sucrase enzymes, respectively.\n\n\nCONCLUSIONS\nFindings of this study allow us to establish scientifically Vaccinium arctostaphylos fruit as a potent antidiabetic agent with antihyperglycemic, antioxidant and triglyceride lowering effects.", "title": "" }, { "docid": "ea739d96ee0558fb23f0a5a020b92822", "text": "Text and structural data mining of web and social media (WSM) provides a novel disease surveillance resource and can identify online communities for targeted public health communications (PHC) to assure wide dissemination of pertinent information. WSM that mention influenza are harvested over a 24-week period, 5 October 2008 to 21 March 2009. Link analysis reveals communities for targeted PHC. 
Text mining is shown to identify trends in flu posts that correlate to real-world influenza-like illness patient report data. We also bring to bear a graph-based data mining technique to detect anomalies among flu blogs connected by publisher type, links, and user-tags.", "title": "" }, { "docid": "5d848875f6aa3c37898b0ac10b5accca", "text": "Eliciting security requirements Security requirements exist because people and the negative agents that they create (such as computer viruses) pose real threats to systems. Security differs from all other specification areas in that someone is deliberately threatening to break the system. Employing use and misuse cases to model and analyze scenarios in systems under design can improve security by helping to mitigate threats. Some misuse cases occur in highly specific situations, whereas others continually threaten systems. For instance, a car is most likely to be stolen when parked and unattended, whereas a Web server might suffer a denial-of-service attack at any time. You can develop misuse and use cases recursively, going from system to subsystem levels or lower as necessary. Lower-level cases can highlight aspects not considered at higher levels, possibly forcing another analysis. The approach offers rich possibilities for exploring, understanding, and validating the requirements in any direction. Drawing the agents and misuse cases explicitly helps focus attention on the elements of the scenario. Let’s compare Figure 1 to games such as chess or Go. A team’s best strategy consists of thinking ahead to the other team’s best move and acting to block it. In Figure 1, the use cases appear on the left; the misuse cases are on the right. The misuse threat is car theft, the use-case player is the lawful driver, and the misuse-case player the car thief. The driver’s freedom to drive the car is at risk if the thief can steal the car. The driver must be able to lock the car—a derived requirement—to mitigate the threat. This is at the top level of analysis. The next level begins when you consider the thief’s response. If he breaks the door lock and shorts the ignition, this requires another mitigating approach, such as locking the transmission. In this focus", "title": "" }, { "docid": "162f080444935117c5125ae8b7c3d51e", "text": "The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter’s loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. 
Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.", "title": "" }, { "docid": "2578607ec2e7ae0d2e34936ec352ff6e", "text": "AI Innovation in Industry is a new department for IEEE Intelligent Systems, and this paper examines some of the basic concerns and uses of AI for big data (AI has been used in several different ways to facilitate capturing and structuring big data, and it has been used to analyze big data for key insights).", "title": "" }, { "docid": "e9497a16e9d12ea837c7a0ec44d71860", "text": "This article surveys existing and emerging disaggregation techniques for energy-consumption data and highlights signal features that might be used to sense disaggregated data in an easily installed and cost-effective manner.", "title": "" } ]
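One of the passages above describes a clustering-based undersampling strategy for imbalanced data: the majority class is replaced either by k-means cluster centers or by the majority points nearest to those centers, with the number of clusters set equal to the minority-class size. A minimal sketch of that idea, assuming scikit-learn is available and using hypothetical arrays X_maj / X_min for the two classes (the 0/1 labels are likewise an assumption):

import numpy as np
from sklearn.cluster import KMeans

def cluster_undersample(X_maj, X_min, use_nearest=False, random_state=0):
    # One cluster per minority sample, as described in the passage.
    k = len(X_min)
    km = KMeans(n_clusters=k, random_state=random_state).fit(X_maj)
    if not use_nearest:
        reduced_maj = km.cluster_centers_          # strategy 1: synthetic cluster centers
    else:
        # strategy 2: real majority points closest to each center
        idx = [int(np.argmin(np.linalg.norm(X_maj - c, axis=1)))
               for c in km.cluster_centers_]
        reduced_maj = X_maj[idx]
    X_bal = np.vstack([reduced_maj, X_min])
    y_bal = np.hstack([np.zeros(len(reduced_maj)), np.ones(len(X_min))])
    return X_bal, y_bal

The balanced set can then be fed to any off-the-shelf classifier; whether centers or their nearest neighbors work better is an empirical question the passage leaves to experiment.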
scidocsrr
849e99f03d32d34bc5cb0416f88d33ab
iBOAT: Isolation-Based Online Anomalous Trajectory Detection
[ { "docid": "4f6638d19d3c4ba3ac970007e41a3682", "text": "A novel learning framework is proposed for anomalous behaviour detection in a video surveillance scenario, so that a classifier which distinguishes between normal and anomalous behaviour patterns can be incrementally trained with the assistance of a human operator. We consider the behaviour of pedestrians in terms of motion trajectories, and parametrise these trajectories using the control points of approximating cubic spline curves. This paper demonstrates an incremental semi-supervised one-class learning procedure in which unlabelled trajectories are combined with occasional examples of normal behaviour labelled by a human operator. This procedure is found to be effective on two different datasets, indicating that a human operator could potentially train the system to detect anomalous behaviour by providing only occasional interventions (a small percentage of the total number of observations).", "title": "" }, { "docid": "ac017c1882e385eab6d9d6fae1db6ac7", "text": "Advances in GPS tracking technology have enabled us to install GPS tracking devices in city taxis to collect a large amount of GPS traces under operational time constraints. These GPS traces provide unparallel opportunities for us to uncover taxi driving fraud activities. In this paper, we develop a taxi driving fraud detection system, which is able to systematically investigate taxi driving fraud. In this system, we first provide functions to find two aspects of evidences: travel route evidence and driving distance evidence. Furthermore, a third function is designed to combine the two aspects of evidences based on Dempster-Shafer theory. To implement the system, we first identify interesting sites from a large amount of taxi GPS logs. Then, we propose a parameter-free method to mine the travel route evidences. Also, we introduce route mark to represent a typical driving path from an interesting site to another one. Based on route mark, we exploit a generative statistical model to characterize the distribution of driving distance and identify the driving distance evidences. Finally, we evaluate the taxi driving fraud detection system with large scale real-world taxi GPS logs. In the experiments, we uncover some regularity of driving fraud activities and investigate the motivation of drivers to commit a driving fraud by analyzing the produced taxi fraud data.", "title": "" }, { "docid": "74c489f81af1fb3a260d453399520497", "text": "Modern machine learning techniques provide robust approaches for data-driven modeling and critical information extraction, while human experts hold the advantage of possessing high-level intelligence and domain-specific expertise. We combine the power of the two for anomaly detection in GPS data by integrating them through a visualization and human-computer interaction interface. In this paper we introduce GPSvas (GPS Visual Analytics System), a system that detects anomalies in GPS data using the approach of visual analytics: a conditional random field (CRF) model is used as the machine learning component for anomaly detection in streaming GPS traces. A visualization component and an interactive user interface are built to visualize the data stream, display significant analysis results (i.e., anomalies or uncertain predications) and hidden information extracted by the anomaly detection model, which enable human experts to observe the real-time data behavior and gain insights into the data flow. 
Human experts further provide guidance to the machine learning model through the interaction tools; the learning model is then incrementally improved through an active learning procedure.", "title": "" }, { "docid": "a2f7ffef3aa5827d4600bd06b0176a29", "text": "GPS-equipped taxis can be viewed as pervasive sensors and the large-scale digital traces produced allow us to reveal many hidden \"facts\" about the city dynamics and human behaviors. In this paper, we aim to discover anomalous driving patterns from taxi's GPS traces, targeting applications like automatically detecting taxi driving frauds or road network change in modern cites. To achieve the objective, firstly we group all the taxi trajectories crossing the same source destination cell-pair and represent each taxi trajectory as a sequence of symbols. Secondly, we propose an Isolation-Based Anomalous Trajectory (iBAT) detection method and verify with large scale taxi data that iBAT achieves remarkable performance (AUC>0.99, over 90% detection rate at false alarm rate of less than 2%). Finally, we demonstrate the potential of iBAT in enabling innovative applications by using it for taxi driving fraud detection and road network change detection.", "title": "" } ]
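The iBAT passage above isolates a test trajectory by repeatedly picking one of its grid cells at random and discarding reference trajectories that do not pass through the picked cell; anomalous routes lose all support after only a few picks. A rough Python sketch of that scoring loop follows, where the grid mapping, the trial averaging, and every name are assumptions rather than the published algorithm:

import random

def isolation_steps(test_traj, ref_trajs, n_trials=50, seed=0):
    # test_traj: iterable of grid-cell ids for the trajectory under test
    # ref_trajs: list of cell-id iterables for trips between the same source-destination pair
    rng = random.Random(seed)
    test_cells = list(set(test_traj))
    totals = []
    for _ in range(n_trials):
        remaining = [set(t) for t in ref_trajs]
        cells = test_cells[:]
        rng.shuffle(cells)
        steps = 0
        for c in cells:
            if not remaining:          # no supporting trajectory left: isolated
                break
            steps += 1
            remaining = [t for t in remaining if c in t]
        totals.append(steps)
    return sum(totals) / len(totals)   # small average -> quickly isolated -> likely anomalous

A threshold on this average (or a normalized score in the spirit of isolation forests) would then separate anomalous trips from normal ones.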
[ { "docid": "8c065f91d367b738c57c10d79f43618f", "text": "Conversational agents aim to offer an alternative to traditional methods for humans to engage with technology. This can mean to reduce the effort to complete a task using reasoning capabilities and by exploiting context, or allow voice interaction when traditional methods are not available or inconvenient. This paper introduces Foodie Fooderson, a conversational kitchen assistant built using IBM Watson technology. The aim of Foodie is to minimize food wastage by optimizing the use of groceries and assist families in improving their eating habits through recipe recommendations taking into account personal context, such as allergies and dietary goals, while helping reduce food waste and managing grocery budgets. This paper discusses Foodie’s architecture, use and benefits. Foodie uses services from CAPRecipes—our context-aware personalized recipe recommender system, SmarterContext—our personal context management system, and selected publicly available nutrition databases. Foodie reasons using IBM Watson’s conversational services to recognize users’ intents and understand events related to the users and their context. We also discuss our experiences in building conversational agents with Watson, including desired features that may improve the development experience with Watson for creating rich conversations in this exciting era of cognitive computing.", "title": "" }, { "docid": "77d0786af4c5eee510a64790af497e25", "text": "Mobile computing is a revolutionary technology, born as a result of remarkable advances in computer hardware and wireless communication. Mobile applications have become increasingly popular in recent years. Today, it is not uncommon to see people playing games or reading mails on handphones. With the rapid advances in mobile computing technology, there is an increasing demand for processing realtime transactions in a mobile environment. Hence there is a strong need for efficient transaction management, data access modes and data management, consistency control and other mobile data management issues. This survey paper will cover issues related to concurrency control in mobile database. This paper studies concurrency control problem in mobile database systems, we analyze the features of mobile database and concurrency control techniques. With the increasing number of mobile hosts there are many new solutions and algorithms for concurrency control being proposed and implemented. We wish that our paper has served as a survey of the important solutions in the fields of concurrency control in mobile database. Keywords-component; Distributed Real-time Databases, Mobile Real-time Databases, Concurrency Control, Data Similarity, and Transaction Scheduling.", "title": "" }, { "docid": "387e02e65ff994691ae8ae95b7c7f69c", "text": "Real world data sets usually have many features, which increases the complexity of data mining task. Feature selection, as a preprocessing step to the data mining, has been shown very effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving comprehensibility. To find the optimal feature subsets is the aim of feature selection. Rough sets theory provides a mathematical approach to find optimal feature subset, but this approach is time consuming. In this paper, we propose a novel heuristic algorithm based on rough sets theory to find out the feature subset. This algorithm employs appearing frequency of attribute as heuristic information. 
Experiment results show in most times our algorithm can find out optimal feature subset quickly and efficiently.", "title": "" }, { "docid": "db54908608579efd067853fed5d3e4e8", "text": "The detection of moving objects from stationary cameras is usually approached by background subtraction, i.e. by constructing and maintaining an up-to-date model of the background and detecting moving objects as those that deviate from such a model. We adopt a previously proposed approach to background subtraction based on self-organization through artificial neural networks, that has been shown to well cope with several of the well known issues for background maintenance. Here, we propose a spatial coherence variant to such approach to enhance robustness against false detections and formulate a fuzzy model to deal with decision problems typically arising when crisp settings are involved. We show through experimental results and comparisons that higher accuracy values can be reached for color video sequences that represent typical situations critical for moving object detection.", "title": "" }, { "docid": "e753dd196255cb5df11bf91f172b71aa", "text": "We propose a parallel-data-free voice-conversion (VC) method that can learn a mapping from source to target speech without relying on parallel data. The proposed method is general purpose, high quality, and parallel-data free and works without any extra data, modules, or alignment procedure. It also avoids over-smoothing, which occurs in many conventional statistical model-based VC methods. Our method, called CycleGAN-VC, uses a cycle-consistent adversarial network (CycleGAN) with gated convolutional neural networks (CNNs) and an identity-mapping loss. A CycleGAN learns forward and inverse mappings simultaneously using adversarial and cycle-consistency losses. This makes it possible to find an optimal pseudo pair from unpaired data. Furthermore, the adversarial loss contributes to reducing over-smoothing of the converted feature sequence. We configure a CycleGAN with gated CNNs and train it with an identity-mapping loss. This allows the mapping function to capture sequential and hierarchical structures while preserving linguistic information. We evaluated our method on a parallel-data-free VC task. An objective evaluation showed that the converted feature sequence was near natural in terms of global variance and modulation spectra. A subjective evaluation showed that the quality of the converted speech was comparable to that obtained with a Gaussian mixture modelbased method under advantageous conditions with parallel and twice the amount of data.", "title": "" }, { "docid": "18252c7ff1b73eba07a35a68e4bcffd7", "text": "This paper addresses the phenomenon of event composition: t he derivation of a single event description expressed in one clause from two le xical heads which could have been used in the description of independent events, eac h expressed in a distinct clause. In English, this phenomenon is well attested with re spect to sentences whose verb is found in combination with an XP describing a result no t strictly lexically entailed by this verb, as in (1).", "title": "" }, { "docid": "2d0a82799d75c08f288d1105280a6d60", "text": "The increasing complexity of deep learning architectures is resulting in training time requiring weeks or even months. 
This slow training is due in part to \"vanishing gradients,\" in which the gradients used by back-propagation are extremely large for weights connecting deep layers (layers near the output layer), and extremely small for shallow layers (near the input layer), this results in slow learning in the shallow layers. Additionally, it has also been shown that in highly non-convex problems, such as deep neural networks, there is a proliferation of high-error low curvature saddle points, which slows down learning dramatically [1]. In this paper, we attempt to overcome the two above problems by proposing an optimization method for training deep neural networks which uses learning rates which are both specific to each layer in the network and adaptive to the curvature of the function, increasing the learning rate at low curvature points. This enables us to speed up learning in the shallow layers of the network and quickly escape high-error low curvature saddle points. We test our method on standard image classification datasets such as MNIST, CIFAR10 and ImageNet, and demonstrate that our method increases accuracy as well as reduces the required training time over standard algorithms.", "title": "" }, { "docid": "72d50e1c72eb8847e342a53bee32103e", "text": "3D shape retrieval may find the existing models as reference for design reuse. 3D segmentation decomposes models into new elements with large granularity and salient shapes to replace the faces in a solid model. In this way, it may reduce the complexity of a CAD model and make a local salient shape more prominent. Therefore, a retrieval method for 3D CAD solid models based on region segmentation is proposed in this paper. To deal with the problems of poor efficiency and uncertain results, a three-step segmentation method for CAD solid models is introduced. First, face adjacency graph (FAG) descriptions for query models and data models are created from their B-rep models. Second, the FAGs are segmented into a set of convex, concave and planar regions, and the relations among the regions are represented with a region graph. Finally, the sub-graphs are combined recursively to form optimal region sub-graphs with respect to an objective function through an optimal procedure. To avoid using complex graph matching or sub-graph matching for model shape comparison, region property codes are introduced to represent face regions in a CAD model. The similarity between the two compared models is evaluated by comparing their region property codes. The experiments show that the proposed method supports 3D CAD solid model retrieval.", "title": "" }, { "docid": "ae20a0ba3b3a5d95a716025391acd1a4", "text": "This paper summarizes authors' experience with the operation of both versions of autonomous humanoid robot Pepper. The robot's construction, as well as its capabilities and limitations are discussed and compared to the NAO robot. Practical background of working with Pepper robots and several years of experience with NAO, result in specific know-how, which the authors would like to share in this article. It reviews not only the robots' technical aspects, but also practical use-cases that the robot has proven to be perfect for (or not).", "title": "" }, { "docid": "e81eb7f9b8e1f3b314d87b3facfac0c8", "text": "In this paper we propose a novel learning framework called Supervised and Weakly Supervised Learning where the goal is to learn simultaneously from weakly and strongly labeled data. 
Strongly labeled data can be simply understood as fully supervised data where all labeled instances are available. In weakly supervised learning only data is weakly labeled which prevents one from directly applying supervised learning methods. Our proposed framework is motivated by the fact that a small amount of strongly labeled data can give considerable improvement over only weakly supervised learning. The primary problem domain focus of this paper is acoustic event and scene detection in audio recordings. We first propose a naive formulation for leveraging labeled data in both forms. We then propose a more general framework for Supervised and Weakly Supervised Learning (SWSL). Based on this general framework, we propose a graph based approach for SWSL. Our main method is based on manifold regularization on graphs in which we show that the unified learning can be formulated as a constraint optimization problem which can be solved by iterative concave-convex procedure (CCCP). Our experiments show that our proposed framework can address several concerns of audio content analysis using weakly labeled data.", "title": "" }, { "docid": "3b9af99b33c15188a8ec50c7decd3b28", "text": "The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the \"style\" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5% on BDDS (drive-cam videos) in an unsupervised setting.", "title": "" }, { "docid": "7004293690fe2fcc2e8880d08de83e7c", "text": "Hidradenitis suppurativa (HS) is a challenging skin disease with limited therapeutic options. Obesity and metabolic syndrome are being increasingly implicated and associated with younger ages and greater metabolic severity. A 19-year-old female with an 8-year history of progressively debilitating cicatricial HS disease presented with obesity, profound anemia, leukocytosis, increased platelet count, hypoalbuminemia, and elevated liver enzymes. A combination of metformin, liraglutide, levonorgestrel-ethinyl estradiol, dapsone, and finasteride was initiated. Acute antibiotic use for recurrences and flares could be slowly discontinued. 
Over the course of 3 years on this regimen, the liver enzymes normalized in 1 year, followed in 2 years by complete resolution of the majority of the hematological and metabolic abnormalities. The sedimentation rate reduced from over 120 to 34 mm/h. She required 1 surgical intervention for perianal disease after 9 months on the regimen. Flares greatly diminished in intensity and duration, with none in the past 6 months. Right axillary lesions have completely healed with residual disease greatly reduced. Chiefly abdominal lesions are persistent. She was able to complete high school from home, start a job, and resume a normal life. Initial weight loss of 40 pounds was not maintained. The current regimen is being well tolerated and continued.", "title": "" }, { "docid": "943667ea2f62ca74a3daae85262a03ab", "text": "Facial action unit (AU) detection and face alignment are two highly correlated tasks since facial landmarks can provide precise AU locations to facilitate the extraction of meaningful local features for AU detection. Most existing AU detection works often treat face alignment as a preprocessing and handle the two tasks independently. In this paper, we propose a novel end-to-end deep learning framework for joint AU detection and face alignment, which has not been explored before. In particular, multi-scale shared features are learned firstly, and high-level features of face alignment are fed into AU detection. Moreover, to extract precise local features, we propose an adaptive attention learning module to refine the attention map of each AU adaptively. Finally, the assembled local features are integrated with face alignment features and global features for AU detection. Experiments on BP4D and DISFA benchmarks demonstrate that our framework significantly outperforms the state-of-the-art methods for AU detection.", "title": "" }, { "docid": "5deaf3ef06be439ad0715355d3592cff", "text": "Hybrid reconfigurable logic circuits were fabricated by integrating memristor-based crossbars onto a foundry-built CMOS (complementary metal-oxide-semiconductor) platform using nanoimprint lithography, as well as materials and processes that were compatible with the CMOS. Titanium dioxide thin-film memristors served as the configuration bits and switches in a data routing network and were connected to gate-level CMOS components that acted as logic elements, in a manner similar to a field programmable gate array. We analyzed the chips using a purpose-built testing system, and demonstrated the ability to configure individual devices, use them to wire up various logic gates and a flip-flop, and then reconfigure devices.", "title": "" }, { "docid": "3b8e5fac9b2a2be74ad59f89c7152b44", "text": "Many previous papers have lamented the fact that the findings of past GSS research have been inconsistent. This paper develops a new model for interpreting GSS effects on performance (a Fit-Appropriation Model), which argues that GSS performance is affected by two factors. The first is the fit between the task and the GSS structures selected for use (i.e., communication support and information processing support). The second is the appropriation support the group receives in the form of training, facilitation, and software restrictiveness to help them effectively incorporate the selected GSS structures into their meeting process. 
A meta-analysis using this model to organize and classify past research found that when used appropriately (i.e., there is a fit between the GSS structures and the task and the group receives appropriation support), GSS use increased the number of ideas generated, took less time, and led to more satisfied participants than if the group worked without the GSS. Fitting the GSS to the task had the most impact on outcome effectiveness (decision quality and ideas), while appropriation support had the most impact on the process (time required and process satisfaction). We conclude that when using this theoretical lens, the results of GSS research do not appear inconsistent.", "title": "" }, { "docid": "a0bffde41cdcda7d5b17cb17a6c333e2", "text": "Wearable robots can assist people with disabilities to perform their daily tasks. However, the size, weight and wearability are important factors in the design because it is worn by the person controlling it. Various disabilities can be assisted with wearable robot technology, from the lower to upper body. The hand is an important part of the body for the disabled to perform their daily tasks. However, compared to the arms or legs, the degree of freedom is much higher, which makes it difficult to fabricate a compact wearable robot. We propose a frameless structure and modified differential mechanism to make the wearable robot compact. For the evaluation and control, it is necessary to analyze the friction force because the mechanism we proposed delivers power through more tube than the previous tendon tube transmission. Different from the previous friction model, we consider the friction at the edge of tube ends. This paper contains the design concept of the developed wearable robotic hand and its friction characteristics.", "title": "" }, { "docid": "64fddaba616a01558f3534ee723883cb", "text": "We demonstrate 70.4 Tb/s transmission over 7,600 km with C+L band EDFAs using coded modulation with hybrid probabilistic and geometrical constellation shaping. We employ multi-stage nonlinearity compensation including DBP, fast LMS equalizer and generalized filter.", "title": "" }, { "docid": "ac46286c7d635ccdcd41358666026c12", "text": "This paper represents our first endeavor to explore how to better understand the complex nature, scope, and practices of eSports. Our goal is to explore diverse perspectives on what defines eSports as a starting point for further research. Specifically, we critically reviewed existing definitions/understandings of eSports in different disciplines. We then interviewed 26 eSports players and qualitatively analyzed their own perceptions of eSports. We contribute to further exploring definitions and theories of eSports for CHI researchers who have considered online gaming a serious and important area of research, and highlight opportunities for new avenues of inquiry for researchers who are interested in designing technologies for this unique genre.", "title": "" }, { "docid": "a4d45c12ecc459ea6564fb0df8d13bd3", "text": "Amazon’s Mechanical Turk (AMT) has revolutionized data processing and collection in both research and industry and remains one of the most prominent paid crowd work platforms today (Kittur et al., 2013). Unfortunately, it also remains in beta nine years after its launch with many of the same limitations as when it was launched: lack of worker profiles indicating skills or experience, inability to post worker or employer ratings and reviews, minimal infrastructure for effectively managing workers or collecting analytics, etc. 
Difficulty accomplishing quality, complex work with AMT continues to drive active research. Fortunately, many other alternative platforms now exist and offer a wide range of features and workflow models for accomplishing quality work (crowdsortium.org). Despite this, research on crowd work has continued to focus on AMT near-exclusively. By analogy, if one had only ever programmed in Basic, how might this limit one’s conception of programming? What if the only search engine we knew was AltaVista? Adar (2011) opined that prior research has often been envisioned too narrowly for AMT, “...writing the user’s manual for MTurk ... struggl[ing] against the limits of the platform...”. Such narrow focus risks AMT’s particular vagaries and limitations unduly shaping research questions, methodology, and imagination. To assess the extent of AMT’s influence upon research questions and use, we review its impact on prior work, assess what functionality and workflows other platforms offer, and consider what light other platforms’ diverse capabilities may shed on current research practices and future directions. To this end, we present a qualitative content analysis (Mayring, 2000) of ClickWorker, CloudFactory, CrowdComputing Systems, CrowdFlower, CrowdSource, MobileWorks, and oDesk. To characterize and differentiate crowd work platforms, we identify several key categories for analysis. Our qualitative content analysis assesses each platform by drawing upon a variety of information sources: Webpages, blogs, news articles, white papers, and research papers. We also shared our analyses with platform representatives and incorporated their feedback. Contributions. Our content analysis of crowd work platforms represents the first such study we know of by researchers for researchers, with categories of analysis chosen based on research relevance. Contributions include our review of how AMT assumptions and limitations have influenced prior research, the detailed criteria we developed for characterizing crowd work platforms, and our analysis. Findings inform", "title": "" } ]
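One of the deep-learning passages above argues for learning rates that are specific to each layer and adaptive to local curvature, so that shallow layers suffering from vanishing gradients still make progress. The exact update rule is not given there; as a hedged illustration only, a LARS-style layer-wise trust ratio can be sketched in plain NumPy (the list-of-arrays parameter layout and all names are assumptions):

import numpy as np

def layerwise_sgd_step(weights, grads, base_lr=0.01, eps=1e-8):
    # weights, grads: lists of NumPy arrays, one entry per layer
    new_weights = []
    for w, g in zip(weights, grads):
        trust = np.linalg.norm(w) / (np.linalg.norm(g) + eps)
        lr = base_lr * trust          # tiny gradients -> proportionally larger step for that layer
        new_weights.append(w - lr * g)
    return new_weights

This is only one way to realize per-layer rates; curvature-aware variants would additionally track how gradients change between steps.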
scidocsrr
1566248c0b3ad736d0a92458749a48f9
The new threat of digital marketing.
[ { "docid": "7560af7ed6d3a2ca48c7be047e90ac47", "text": "In the domain of computer games, research into the interaction between player and game has centred on 'enjoyment', often drawing in particular on optimal experience research and Csikszentmihalyi's 'Flow theory'. Flow is a well-established construct for examining experience in any setting and its application to game-play is intuitive. Nevertheless, it's not immediately obvious how to translate between the flow construct and an operative description of game-play. Previous research has attempted this translation through analogy. In this article we propose a practical, integrated approach for analysis of the mechanics and aesthetics of game-play, which helps develop deeper insights into the capacity for flow within games.\n The relationship between player and game, characterized by learning and enjoyment, is central to our analysis. We begin by framing that relationship within Cowley's user-system-experience (USE) model, and expand this into an information systems framework, which enables a practical mapping of flow onto game-play. We believe this approach enhances our understanding of a player's interaction with a game and provides useful insights for games' researchers seeking to devise mechanisms to adapt game-play to individual players.", "title": "" } ]
[ { "docid": "e9dc7d048b53ec9649dec65e05a77717", "text": "Recent advances in object detection have exploited object proposals to speed up object searching. However, many of existing object proposal generators have strong localization bias or require computationally expensive diversification strategies. In this paper, we present an effective approach to address these issues. We first propose a simple and useful localization bias measure, called superpixel tightness. Based on the characteristics of superpixel tightness distribution, we propose an effective method, namely multi-thresholding straddling expansion (MTSE) to reduce localization bias via fast diversification. Our method is essentially a box refinement process, which is intuitive and beneficial, but seldom exploited before. The greatest benefit of our method is that it can be integrated into any existing model to achieve consistently high recall across various intersection over union thresholds. Experiments on PASCAL VOC dataset demonstrates that our approach improves numerous existing models significantly with little computational overhead.", "title": "" }, { "docid": "074d9b68f1604129bcfdf0bb30bbd365", "text": "This paper describes a methodology for semi-supervised learning of dialogue acts using the similarity between sentences. We suppose that the dialogue sentences with the same dialogue act are more similar in terms of semantic and syntactic information. However, previous work on sentence similarity mainly modeled a sentence as bag-of-words and then compared different groups of words using corpus-based or knowledge-based measurements of word semantic similarity. Novelly, we present a vector-space sentence representation, composed of word embeddings, that is, the related word distributed representations, and these word embeddings are organised in a sentence syntactic structure. Given the vectors of the dialogue sentences, a distance measurement can be well-defined to compute the similarity between them. Finally, a seeded k-means clustering algorithm is implemented to classify the dialogue sentences into several categories corresponding to particular dialogue acts. This constitutes the semi-supervised nature of the approach, which aims to ameliorate the reliance of the availability of annotated corpora. Experiments with Switchboard Dialog Act corpus show that classification accuracy is improved by 14%, compared to the state-of-art methods based on Support Vector Machine.", "title": "" }, { "docid": "80c745ee8535d9d53819ced4ad8f996d", "text": "Wireless Sensor Networks (WSN) are vulnerable to various sensor faults and faulty measurements. This vulnerability hinders efficient and timely response in various WSN applications, such as healthcare. For example, faulty measurements can create false alarms which may require unnecessary intervention from healthcare personnel. Therefore, an approach to differentiate between real medical conditions and false alarms will improve remote patient monitoring systems and quality of healthcare service afforded by WSN. In this paper, a novel approach is proposed to detect sensor anomaly by analyzing collected physiological data from medical sensors. The objective of this method is to effectively distinguish false alarms from true alarms. It predicts a sensor value from historic values and compares it with the actual sensed value for a particular instance. The difference is compared against a threshold value, which is dynamically adjusted, to ascertain whether the sensor value is anomalous. 
The proposed approach has been applied to real healthcare datasets and compared with existing approaches. Experimental results demonstrate the effectiveness of the proposed system, providing high Detection Rate (DR) and low False Positive Rate (FPR).", "title": "" }, { "docid": "eadba0f4aa52b20b0a512cc3d869146d", "text": "This paper first describes the phenomenon of Gaussian pulse spread due to numerical dispersion in the finite-difference time-domain (FDTD) method for electromagnetic computation. This effect is undesired, as it reduces the precision with which multipath pulses can be resolved in the time domain. The quantification of the pulse spread is thus useful to evaluate the accuracy of pulsed FDTD simulations. Then, using a linear approximation to the numerical phase delay, a formula to predict the pulse duration is developed. Later, this formula is used to design a Gaussian source that keeps the spread of numerical pulses bounded in wideband FDTD. Finally, the developed model and the approximation are validated via simulations.", "title": "" }, { "docid": "d2b7ff4fc41610013b98a70fc32c8176", "text": "Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the-art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.", "title": "" }, { "docid": "f267c096ffe69c40b5bd987450cdde84", "text": "Recent breakthroughs in cryptanalysis of standard hash functions like SHA-1 and MD5 raise the need for alternatives. The MD6 hash function is developed by a team led by Professor Ronald L. Rivest in response to the call for proposals for a SHA-3 cryptographic hash algorithm by the National Institute of Standards and Technology. The hardware performance evaluation of hash chip design mainly includes efficiency and flexibility. In this paper, a RAM-based reconfigurable FPGA implementation of the MD6-224/256/384/512 hash function is presented. The design achieves a throughput ranging from 118 to 227 Mbps at the maximum frequency of 104 MHz on a low-cost Cyclone III device. The implementation of MD6 core functionality uses mainly embedded Block RAMs and small resources of logic elements in Altera FPGA, which satisfies the needs of most embedded applications, including wireless communication. The implementation results also show that the MD6 hash function has good reconfigurability.", "title": "" }, { "docid": "cb00e564a81ace6b75e776f1fe41fb8f", "text": "INDIVIDUAL PROCESSES IN INTERGROUP BEHAVIOR; From Individual to Group Impressions; GROUP MEMBERSHIP AND INTERGROUP BEHAVIOR; The Scope and Range of Ethnocentrism; The Development of Ethnocentrism; Intergroup Conflict and Competition; Interpersonal and intergroup behavior; Intergroup conflict and group cohesion; Power and status in intergroup behavior; Social Categorization and Intergroup Behavior; Social categorization: cognitions, values, and groups; Social categorization and intergroup discrimination; Social identity and social comparison; THE REDUCTION OF INTERGROUP DISCRIMINATION; Intergroup Cooperation and Superordinate Goals; Intergroup Contact; Multigroup Membership and \"Individualization\" of the Outgroup; SUMMARY", "title": "" },
{ "docid": "0ff8c4799b62c70ef6b7d70640f1a931", "text": "Using on-chip interconnection networks in place of ad-hoc global wiring structures the top level wires on a chip and facilitates modular design. With this approach, system modules (processors, memories, peripherals, etc...) communicate by sending packets to one another over the network. The structured network wiring gives well-controlled electrical parameters that eliminate timing iterations and enable the use of high-performance circuits to reduce latency and increase bandwidth. The area overhead required to implement an on-chip network is modest; we estimate 6.6%. This paper introduces the concept of on-chip networks, sketches a simple network, and discusses some challenges in the architecture and design of these networks.", "title": "" }, { "docid": "9ea0612f646228a3da41b7f55c23e825", "text": "It is shown that many published models for the Stanford Question Answering Dataset (Rajpurkar et al., 2016) lack robustness, suffering an over 50% decrease in F1 score during adversarial evaluation based on the AddSent (Jia and Liang, 2017) algorithm. It has also been shown that retraining models on data generated by AddSent has limited effect on their robustness. 
We propose a novel alternative adversary-generation algorithm, AddSentDiverse, that significantly increases the variance within the adversarial training data by providing effective examples that punish the model for making certain superficial assumptions. Further, in order to improve robustness to AddSent’s semantic perturbations (e.g., antonyms), we jointly improve the model’s semantic-relationship learning capabilities in addition to our AddSentDiverse-based adversarial training data augmentation. With these additions, we show that we can make a state-of-the-art model significantly more robust, achieving a 36.5% increase in F1 score under many different types of adversarial evaluation while maintaining performance on the regular SQuAD task.", "title": "" }, { "docid": "b10b7726ee76355e2ea3aded3c356178", "text": "In this paper, a planar 180° phase-reversal T-junction and a modified magic-T using substrate integrated waveguide (SIW) and slotline are proposed and developed for RF/microwave applications on the basis of the substrate integrated circuits concept. In this case, slotline is used to generate the odd-symmetric field pattern of the SIW in the phase-reverse T-junction. Measured results indicate that 0.3-dB amplitude imbalance and 3° phase imbalance can be achieved for the proposed 180° phase-reversal T-junction over the entire -band. The modified narrowband and optimized wideband magic-T are developed and fabricated, respectively. Measured results of all those circuits agree well with their simulated ones. Finally, as an application demonstration of our proposed magic-T, a singly balanced mixer based on this structure is designed and measured with good performances.", "title": "" }, { "docid": "c749e0a0ae26f95bd8baedfa6e8c5f05", "text": "This paper proposes a new polynomial time constant factor approximation algorithm for a more-than-a-decade-long open NP-hard problem, the minimum four-connected $m$-dominating set problem in unit disk graph (UDG) with any positive integer $m \geq 1$ for the first time in the literature. We observe that it is difficult to modify the existing constant factor approximation algorithm for the minimum three-connected $m$-dominating set problem to solve the minimum four-connected $m$-dominating set problem in UDG due to the structural limitation of Tutte decomposition, which is the main graph theory tool used by Wang et al. to design their algorithm. To resolve this issue, we first reinvent a new constant factor approximation algorithm for the minimum three-connected $m$-dominating set problem in UDG and later use this algorithm to design a new constant factor approximation algorithm for the minimum four-connected $m$-dominating set problem in UDG.", "title": "" }, { "docid": "8a1e2eddd9107412bd0d34bfde73322d", "text": "The aim of this meta-analysis was to compare social desirability scores between paper and computer surveys. 
Subgroup analyses were conducted with Internet connectivity, level of anonymity, individual or group test setting, possibility of skipping items, possibility of backtracking previous items, inclusion of questions of sensitive nature, and social desirability scale type as moderators. Subgroup analyses were also conducted for study characteristics, namely the randomisation of participants, sample type (students vs. other), and study design (betweenvs. within-subjects). Social desirability scores between the two administration modes were compared for 51 studies that included 62 independent samples and 16,700 unique participants. The overall effect of administration mode was close to zero (Cohen’s d = 0.00 for fixed-effect and d = −0.01 for random-effects meta-analysis). The majority of the effect sizes in the subgroup analyses were not significantly different from zero either. The effect sizes were close to zero for both Internet and offline surveys. In conclusion, the totality of evidence indicates that there is no difference in social desirability between paper-and-pencil surveys and computer surveys. Publication year and sample size were positively correlated (ρ = .64), which suggests that certain of the large effects that have been found in the past may have been due to sampling error.", "title": "" }, { "docid": "7ba3a9bec79bea7fd7d66aafc0a1036b", "text": "Biometric recognition offers a reliable solution to the problem of user authentication in identity management systems. With the widespread deployment of biometric systems in various applications, there are increasing concerns about the security and privacy of biometric technology. Public acceptance of biometrics technology will depend on the ability of system designers to demonstrate that these systems are robust, have low error rates, and are tamper proof. We present a high-level categorization of the various vulnerabilities of a biometric system and discuss countermeasures that have been proposed to address these vulnerabilities. In particular, we focus on biometric template security which is an important issue because, unlike passwords and tokens, compromised biometric templates cannot be revoked and reissued. Protecting the template is a challenging task due to intrauser variability in the acquired biometric traits. We present an overview of various biometric template protection schemes and discuss their advantages and limitations in terms of security, revocability, and impact on matching accuracy. A template protection scheme with provable security and acceptable recognition performance has thus far remained elusive. Development of such a scheme is crucial as biometric systems are beginning to proliferate into the core physical and information infrastructure of our society.", "title": "" }, { "docid": "aff21d90f9a844c31989f8548e06425d", "text": "In this digital day and age, we are becoming increasingly dependent on multimedia content, especially digital images and videos, to provide a reliable proof of occurrence of events. However, the availability of several sophisticated yet easy-to-use content editing software has led to great concern regarding the trustworthiness of such content. Consequently, over the past few years, visual media forensics has emerged as an indispensable research field, which basically deals with development of tools and techniques that help determine whether or not the digital content under consideration is authentic, i.e., an actual, unaltered representation of reality. 
Over the last two decades, this research field has demonstrated tremendous growth and innovation. This paper presents a comprehensive and scrutinizing bibliography addressing the published literature in the field of passive-blind video content authentication, with primary focus on forgery/tamper detection, video re-capture and phylogeny detection, and video anti-forensics and counter anti-forensics. Moreover, the paper intimately analyzes the research gaps found in the literature, provides worthy insight into the areas, where the contemporary research is lacking, and suggests certain courses of action that could assist developers and future researchers explore new avenues in the domain of video forensics. Our objective is to provide an overview suitable for both the researchers and practitioners already working in the field of digital video forensics, and for those researchers and general enthusiasts who are new to this field and are not yet completely equipped to assimilate the detailed and complicated technical aspects of video forensics.", "title": "" }, { "docid": "19f4de5f01f212bf146087d4695ce15e", "text": "Reliable feature correspondence between frames is a critical step in visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) algorithms. In comparison with existing VO and V-SLAM algorithms, semi-direct visual odometry (SVO) has two main advantages that lead to stateof-the-art frame rate camera motion estimation: direct pixel correspondence and efficient implementation of probabilistic mapping method. This paper improves the SVO mapping by initializing the mean and the variance of the depth at a feature location according to the depth prediction from a singleimage depth prediction network. By significantly reducing the depth uncertainty of the initialized map point (i.e., small variance centred about the depth prediction), the benefits are twofold: reliable feature correspondence between views and fast convergence to the true depth in order to create new map points. We evaluate our method with two outdoor datasets: KITTI dataset and Oxford Robotcar dataset. The experimental results indicate that the improved SVO mapping results in increased robustness and camera tracking accuracy.", "title": "" }, { "docid": "98345001f9c4024a7650ee7940f3b57c", "text": "AIM\nTo evaluate the need for, and the development and utility of, pen-and-paper (Modified) Early Warning Scoring (MEWS/EWS) systems for adult inpatients outside critical care and emergency departments, by reviewing published literature.\n\n\nBACKGROUND\nSerious adverse events can be prevented by recognizing and responding to early signs of clinical and physiological deterioration.\n\n\nEVALUATION\nOf 534 papers reporting MEWS/EWS systems for adult inpatients identified, 14 contained useable data on development and utility of MEWS/EWS systems. Systems without aggregate weighted scores were excluded.\n\n\nKEY ISSUES\nMEWS/EWS systems facilitate recognition of abnormal physiological parameters in deteriorating patients, but have limitations. There is no single validated scoring tool across diagnoses. Evidence of prospective validation of MEWS/EWS systems is limited; neither is implementation based on clinical trials. 
There is no evidence that implementation of Westernized MEWS/EWS systems is appropriate in resource-poor locations.\n\n\nCONCLUSIONS\nBetter monitoring implies better care, but there is a paucity of data on the validation, implementation, evaluation and clinical testing of vital signs' monitoring systems in general wards.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nRecording vital signs is not enough. Patient safety continues to depend on nurses' clinical judgment of deterioration. Resources are needed to validate and evaluate MEWS/EWS systems in context.", "title": "" }, { "docid": "ed9beb7f6ffc65439f34294dec11a966", "text": "CONTEXT\nA variety of ankle self-stretching exercises have been recommended to improve ankle-dorsiflexion range of motion (DFROM) in individuals with limited ankle dorsiflexion. A strap can be applied to stabilize the talus and facilitate anterior glide of the distal tibia at the talocrural joint during ankle self-stretching exercises. Novel ankle self-stretching using a strap (SSS) may be a useful method of improving ankle DFROM.\n\n\nOBJECTIVE\nTo compare the effects of 2 ankle-stretching techniques (static stretching versus SSS) on ankle DFROM.\n\n\nDESIGN\nRandomized controlled clinical trial.\n\n\nSETTING\nUniversity research laboratory.\n\n\nPATIENTS OR OTHER PARTICIPANTS\nThirty-two participants with limited active dorsiflexion (<20°) while sitting (14 women and 18 men) were recruited.\n\n\nMAIN OUTCOME MEASURE(S)\nThe participants performed 2 ankle self-stretching techniques (static stretching and SSS) for 3 weeks. Active DFROM (ADFROM), passive DFROM (PDFROM), and the lunge angle were measured. An independent t test was used to compare the improvements in these values before and after the 2 stretching interventions. The level of statistical significance was set at α = .05.\n\n\nRESULTS\nActive DFROM and PDFROM were greater in both stretching groups after the 3-week interventions. However, ADFROM, PDFROM, and the lunge angle were greater in the SSS group than in the static-stretching group (P < .05).\n\n\nCONCLUSIONS\nAnkle SSS is recommended to improve ADFROM, PDFROM, and the lunge angle in individuals with limited DFROM.", "title": "" }, { "docid": "2185097978553d5030252ffa9240fb3c", "text": "The concept of celebrity culture remains remarkably undertheorized in the literature, and it is precisely this gap that this article aims to begin filling in. Starting with media culture definitions, celebrity culture is conceptualized as collections of sense-making practices whose main resources of meaning are celebrity. Consequently, celebrity cultures are necessarily plural. This approach enables us to focus on the spatial differentiation between (sub)national celebrity cultures, for which the Flemish case is taken as a central example. We gain a better understanding of this differentiation by adopting a translocal frame on culture and by focusing on the construction of celebrity cultures through the ‘us and them’ binary and communities. Finally, it is also suggested that what is termed cultural working memory improves our understanding of the remembering and forgetting of actual celebrities, as opposed to more historical figures captured by concepts such as cultural memory.", "title": "" }, { "docid": "9a5a2e4918e097794d1b4115c0053d6a", "text": "We report six cases of anastomosing hemangioma of the ovary. All lesions were unilateral and arose in 43 to 81 year old females. In all but one patient, the tumor was asymptomatic and represented incidental finding. 
The exception was a tumor associated with massive ascites and elevated CA 125. The tumors were, on cut section, spongy and dark violet in color. The size of tumors ranged from 0.5 to 3.5 cm. All lesions showed the same histological features and consisted of capillary sized anastomosing vessels with sinusoid-like pattern intermingled with sporadic medium sized vessels. Interestingly, in all cases there were areas of luteinized cells at the tumor periphery, which ranged from rare small nests to multiple and commonly confluent areas. In one tumor, components of mature adipose tissue were present. Immunohistochemically, all tumors were CD31 and CD34 positive. Other markers examined were negative, including; estrogen receptor, progesterone receptor, androgen receptor, and D2–40. Proliferative activity (Ki-67 index) was very low in all cases. Anastomosing hemangioma is a rare entity, only 8 lesions occurring in ovary has been described from its initial description in 2009. We report six additional cases with their clinicopathological correlation.", "title": "" } ]
scidocsrr
d83091bd771b93790c303ca8b51a82d5
Alert Detection in System Logs
[ { "docid": "dbb9db490ae3c1bb91d22ecd8d679270", "text": "The growing computational and storage needs of several scientific applications mandate the deployment of extreme-scale parallel machines, such as IBM's BlueGene/L, which can accommodate as many as 128K processors. In this paper, we present our experiences in collecting and filtering error event logs from a 8192 processor BlueGene/L prototype at IBM Rochester, which is currently ranked #8 in the Top-500 list. We analyze the logs collected from this machine over a period of 84 days starting from August 26, 2004. We perform a three-step filtering algorithm on these logs: extracting and categorizing failure events; temporal filtering to remove duplicate reports from the same location; and finally coalescing failure reports of the same error across different locations. Using this approach, we can substantially compress these logs, removing over 99.96% of the 828,387 original entries, and more accurately portray the failure occurrences on this system.", "title": "" }, { "docid": "4dc9360837b5793a7c322f5b549fdeb1", "text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering", "title": "" } ]
[ { "docid": "474e7ed8e2629a6d73718de7667a68f0", "text": "The Robot Operating System (ROS) is a set of software libraries and tools used to build robotic systems. ROS is known for a distributed and modular design. Given a model of the environment, task planning is concerned with the assembly of actions into a structure that is predicted to achieve goals. This can be done in a way that minimises costs, such as time or energy. Task planning is vital in directing the actions of a robotic agent in domains where a causal chain could lock the agent into a dead-end state. Moreover, planning can be used in less constrained domains to provide more intelligent behaviour. This paper describes the ROSPLAN framework, an architecture for embedding task planning into ROS systems. We provide a description of the architecture and a case study in autonomous robotics. Our case study involves autonomous underwater vehicles in scenarios that demonstrate the flexibility and robustness of our approach.", "title": "" }, { "docid": "2c8448435e42a825c1295652aa9a61da", "text": "In recent years, the analysis and evaluation of students' performance and retaining the standard of education is a very important problem in all the educational institutions. The most important goal of the paper is to analyze and evaluate the school students' performance by applying data mining classification algorithms in weka tool. The data mining tool has been generally accepted as a decision making tool to facilitate better resource utilization in terms of students' performance. The various classification algorithms could be specifically mentioned as J48, Random Forest, Multilayer Perceptron, IB1 and Decision Table are used. The results of such classification model deals with accuracy level, confusion matrices and also the execution time. Therefore conclusion could be reached that the Random Forest performance is better than that of different algorithms.", "title": "" }, { "docid": "c65c4582aecf22e63e88fc89c38f4bc1", "text": "CONTEXT\nCognitive impairment in late-life depression (LLD) is highly prevalent, disabling, poorly understood, and likely related to long-term outcome.\n\n\nOBJECTIVES\nTo determine the characteristics and determinants of neuropsychological functioning in LLD.\n\n\nDESIGN\nCross-sectional study of groups of LLD patients and control subjects.\n\n\nSETTING\nOutpatient, university-based depression research clinic.\n\n\nPARTICIPANTS\nOne hundred patients without dementia 60 years and older who met DSM-IV criteria for current episode of unipolar major depression (nonpsychotic) and 40 nondepressed, age- and education-equated control subjects.\n\n\nMAIN OUTCOME MEASURES\nA comprehensive neuropsychological battery.\n\n\nRESULTS\nRelative to control subjects, LLD patients performed poorer in all cognitive domains. More than half exhibited significant impairment (performance below the 10th percentile of the control group). Information processing speed and visuospatial and executive abilities were the most broadly and frequently impaired. The neuropsychological impairments were mediated almost entirely by slowed information processing (beta =.45-.80). Education (beta =.32) and ventricular atrophy (beta =.28) made additional modest contributions to variance in measures of language ability. 
Medical and vascular disease burden, apolipoprotein E genotype, and serum anticholinergicity did not contribute to variance in any cognitive domain.\n\n\nCONCLUSIONS\nLate-life depression is characterized by slowed information processing, which affects all realms of cognition. This supports the concept that frontostriatal dysfunction plays a key role in LLD. The putative role of some risk factors was validated (eg, advanced age, low education, depression severity), whereas others were not (eg, medical burden, age at onset of first depressive episode). Further studies of neuropsychological functioning in remitted LLD patients are needed to parse episode-related and persistent factors and to relate them to underlying neural dysfunction.", "title": "" }, { "docid": "3505170ccc81058b75e2073f8080b799", "text": "Indoor Location Based Services (LBS), such as indoor navigation and tracking, still have to deal with both technical and non-technical challenges. For this reason, they have not yet found a prominent position in people’s everyday lives. Reliability and availability of indoor positioning technologies, the availability of up-to-date indoor maps, and privacy concerns associated with location data are some of the biggest challenges to their development. If these challenges were solved, or at least minimized, there would be more penetration into the user market. This paper studies the requirements of LBS applications, through a survey conducted by the authors, identifies the current challenges of indoor LBS, and reviews the available solutions that address the most important challenge, that of providing seamless indoor/outdoor positioning. The paper also looks at the potential of emerging solutions and the technologies that may help to handle this challenge.", "title": "" }, { "docid": "cd014a0fcae02be9fb28c48d6b061c7e", "text": "Human choices are remarkably susceptible to the manner in which options are presented. This so-called \"framing effect\" represents a striking violation of standard economic accounts of human rationality, although its underlying neurobiology is not understood. We found that the framing effect was specifically associated with amygdala activity, suggesting a key role for an emotional system in mediating decision biases. Moreover, across individuals, orbital and medial prefrontal cortex activity predicted a reduced susceptibility to the framing effect. This finding highlights the importance of incorporating emotional processes within models of human choice and suggests how the brain may modulate the effect of these biasing influences to approximate rationality.", "title": "" }, { "docid": "64d53035eb919d5e27daef6b666b7298", "text": "The 3L-NPC (Neutral-Point-Clamped) is the most popular multilevel converter used in high-power medium-voltage applications. An important disadvantage of this structure is the unequal distribution of losses among the switches. The performances of 3L-NPC structure were improved by developing the 3L-ANPC (Active-NPC) converter which has more degrees of freedom. In this paper the switching states and the loss distribution problem are studied for different PWM strategies in a STATCOM application. 
The PSIM simulation results are shown in order to validate the PWM strategies studied for 3L-ANPC converter.", "title": "" }, { "docid": "fac56f5aa781c22104ab0d9ccc02d457", "text": "BACKGROUND\nCurrent guidelines suggest that, for patients at moderate risk of death from unstable coronary-artery disease, either an interventional strategy (angiography followed by revascularisation) or a conservative strategy (ischaemia-driven or symptom-driven angiography) is appropriate. We aimed to test the hypothesis that an interventional strategy is better than a conservative strategy in such patients.\n\n\nMETHODS\nWe did a randomised multicentre trial of 1810 patients with non-ST-elevation acute coronary syndromes (mean age 62 years, 38% women). Patients were assigned an early intervention or conservative strategy. The antithrombin agent in both groups was enoxaparin. The co-primary endpoints were a combined rate of death, non-fatal myocardial infarction, or refractory angina at 4 months; and a combined rate of death or non-fatal myocardial infarction at 1 year. Analysis was by intention to treat.\n\n\nFINDINGS\nAt 4 months, 86 (9.6%) of 895 patients in the intervention group had died or had a myocardial infarction or refractory angina, compared with 133 (14.5%) of 915 patients in the conservative group (risk ratio 0.66, 95% CI 0.51-0.85, p=0.001). This difference was mainly due to a halving of refractory angina in the intervention group. Death or myocardial infarction was similar in both treatment groups at 1 year (68 [7.6%] vs 76 [8.3%], respectively; risk ratio 0.91, 95% CI 0.67-1.25, p=0.58). Symptoms of angina were improved and use of antianginal medications significantly reduced with the interventional strategy (p<0.0001).\n\n\nINTERPRETATION\nIn patients presenting with unstable coronary-artery disease, an interventional strategy is preferable to a conservative strategy, mainly because of the halving of refractory or severe angina, and with no increased risk of death or myocardial infarction.", "title": "" }, { "docid": "507699dcd766679b0527946b63ffc5e2", "text": "For ChinesePOS tagging, word segmentation is a preliminary step. To avoid error propagation and improve segmentation by utilizing POS information, segmentation and tagging can be performed simultaneously. A challenge for this joint approach is the large combined search space, which makes efficient decoding very hard. Recent research has explored the integration of segmentation and POS tagging, by decoding under restricted versions of the full combined search space. In this paper, we propose a joint segmentation and POStagging model that does not impose any hard constraints on the interaction between word and POS information. Fast decoding is achieved by using a novel multiple-beam search algorithm. The system uses a discriminative statistical model, trained using the generalized perceptron algorithm. The joint model gives an error reduction in segmentation accuracy of 14.6% and an error reduction in tagging accuracy of12.2%, compared to the traditional pipeline approach.", "title": "" }, { "docid": "17b66811d671fbe77a935a9028c954ce", "text": "Research in management information systems often examines computer literacy as an independent variable. Study subjects may be asked to self-report their computer literacy and that literacy is then utilized as a research variable. However, it is not known whether self-reported computer literacy is a valid measure of a subject’s actual computer literacy. 
The research presented in this paper examined the question of whether self-reported computer literacy can be a reliable indication of actual computer literacy and therefore valid for use in empirical research. Study participants were surveyed and asked to self-report their level of computer literacy. Following, subjects were tested to determine an objective measure of computer literacy. The data analysis determined that self-reported computer literacy is not reliable. Results of this research are important for academic programs, for businesses, and for future empirical studies in management information systems.", "title": "" }, { "docid": "b8a7eb324085eef83f88185b9544d5b5", "text": "The research in the area of game accessibility has grown significantly since the last time it was examined in 2005. This paper examines the body of work between 2005 and 2010. We selected a set of papers on topics we felt represented the scope of the field, but were not able to include all papers on the subject. A summary of the research we examined is provided, along with suggestions for future work in game accessibility. It is hoped that this summary will prompt others to perform further research in this area.", "title": "" }, { "docid": "ecdb103e650be2afc4192979a2463af0", "text": "We have developed an F-band (90 to 140 GHz) bidirectional amplifier MMIC using a 75-nm InP HEMT technology for short-range millimeter-wave multi-gigabit communication systems. Inherent symmetric common-gate transistors and parallel circuits consisting of an inductor and a switch realizes a bidirectional operation with a wide bandwidth of over 50 GHz. Small signal gains of 12-15 dB and 9-12 dB were achieved in forward and reverse directions, respectively. Fractional bandwidths of the developed bidirectional amplifier were 39% for the forward direction and 32% for the reverse direction, which were almost double as large as those of conventional bidirectional amplifiers. The power consumption of the bidirectional amplifier was 15 mW under a 2.4-V supply. The chip measures 0.70 × 0.65 mm. The simulated NF is lower than 5 dB, and Psat is larger than 5 dBm. The use of this bidirectional amplifier provides miniaturization of the multi-gigabit communication systems and eliminates signal switching loss.", "title": "" }, { "docid": "5cb4cbcf553da673354ebb325e18339e", "text": "The MIDI Toolbox is a compilation of functions for analyzing and visualizing MIDI files in the Matlab computing environment. In this article, the basic issues of the Toolbox are summarized and demonstrated with examples ranging from melodic contour, similarity, key-finding, meter-finding to segmentation. The Toolbox is based on symbolic musical data but signal processing methods are applied to cover such aspects of musical behaviour as geometric representations and short-term memory. Besides simple manipulation and filtering functions, the toolbox contains cognitively inspired analytic techniques that are suitable for context-dependent musical analysis, a prerequisite for many music information retrieval applications.", "title": "" }, { "docid": "714641a148e9a5f02bb13d5485203d70", "text": "The aim of this paper is to present a review of recently used current control techniques for three-phase voltage-source pulsewidth modulated converters. Various techniques, different in concept, have been described in two main groups: linear and nonlinear. 
The first includes proportional integral (stationary and synchronous) and state feedback controllers, and predictive techniques with constant switching frequency. The second comprises bang-bang (hysteresis, delta modulation) controllers and predictive controllers with on-line optimization. New trends in the current control—neural networks and fuzzy-logic-based controllers—are discussed, as well. Selected oscillograms accompany the presentation in order to illustrate properties of the described controller groups.", "title": "" }, { "docid": "565941db0284458e27485d250493fd2a", "text": "Identifying background (context) information in scientific articles can help scholars understand major contributions in their research area more easily. In this paper, we propose a general framework based on probabilistic inference to extract such context information from scientific papers. We model the sentences in an article and their lexical similarities as a Markov Random Field tuned to detect the patterns that context data create, and employ a Belief Propagation mechanism to detect likely context sentences. We also address the problem of generating surveys of scientific papers. Our experiments show greater pyramid scores for surveys generated using such context information rather than citation sentences alone.", "title": "" }, { "docid": "569ae662a71c3484e7c53e6cf8dda50d", "text": "Node mobility and end-to-end disconnections in Delay Tolerant Networks (DTNs) greatly impair the effectiveness of data dissemination. Although social-based approaches can be used to address the problem, most existing solutions only focus on forwarding data to a single destination. In this paper, we are the first to study multicast in DTNs from the social network perspective. We study multicast in DTNs with single and multiple data items, investigate the essential difference between multicast and unicast in DTNs, and formulate relay selections for multicast as a unified knapsack problem by exploiting node centrality and social community structures. Extensive trace-driven simulations show that our approach has similar delivery ratio and delay to the Epidemic routing, but can significantly reduce the data forwarding cost measured by the number of relays used.", "title": "" }, { "docid": "74da516d4a74403ac5df760b0b656b1f", "text": "In this paper a novel and effective approach for automated audio classification is presented that is based on the fusion of different sets of features, both visual and acoustic. A number of different acoustic and visual features of sounds are evaluated and compared. These features are then fused in an ensemble that produces better classification accuracy than other state-of-the-art approaches. The visual features of sounds are built starting from the audio file and are taken from images constructed from different spectrograms, a gammatonegram, and a rhythm image. These images are divided into subwindows from which a set of texture descriptors are extracted. For each feature descriptor a different Support Vector Machine (SVM) is trained. The SVMs outputs are summed for a final decision. The proposed ensemble is evaluated on three well-known databases of music genre classification (the Latin Music Database, the ISMIR 2004 database, and the GTZAN genre collection), a dataset of Bird vocalization aiming specie recognition, and a dataset of right whale calls aiming whale detection. 
The MATLAB code for the ensemble of classifiers and for the extraction of the features will be publicly available (https://www.dei.unipd.it/node/2357 +Pattern Recognition and Ensemble Classifiers).", "title": "" }, { "docid": "5e18a7f3eb71f20e3905a17de5e0077c", "text": "Research Article Nancy K. Lankton Marshall University lankton@marshall.edu Harrison D. McKnight Michigan State University mcknight@bus.msu.edu Expectation disconfirmation theory (EDT) posits that expectations, disconfirmation, and performance influence customer satisfaction. While information systems researchers have adopted EDT to explain user information technology (IT) satisfaction, they often use various EDT model subsets. Leaving out one or more key variables, or key relationships among the variables, can reduce EDT’s explanatory potential. It can also suggest an intervention for practice that is very different from (and inferior to) the intervention suggested by a more complete model. Performance is an especially beneficial but largely neglected EDT construct in IT research. Using EDT theory from the marketing literature, this paper explains and demonstrates the incremental value of using the complete IT EDT model with performance versus the simplified model without it. Studying software users, we find that the complete model with performance both reveals assimilation effects for less experienced users and uncovers asymmetric effects not found in the simplified model. We also find that usefulness performance more strongly influences usage continuance intention than does any other EDT variable. We explain how researchers and practitioners can take full advantage of the predictive and explanatory power of the complete IT EDT model.", "title": "" }, { "docid": "9871a5673f042b0565c50295be188088", "text": "Formal security analysis has proven to be a useful tool for tracking modifications in communication protocols in an automated manner, where full security analysis of revisions requires minimum efforts. In this paper, we formally analysed prominent IoT protocols and uncovered many critical challenges in practical IoT settings. We address these challenges by using formal symbolic modelling of such protocols under various adversaries and security goals. Furthermore, this paper extends formal analysis to cryptographic Denial-of-Service (DoS) attacks and demonstrates that a vast majority of IoT protocols are vulnerable to such resource exhaustion attacks. We present a cryptographic DoS attack countermeasure that can be generally used in many IoT protocols. Our study of prominent IoT protocols such as CoAP and MQTT shows the benefits of our approach.", "title": "" }, { "docid": "7fdf51a07383b9004882c058743b5726", "text": "We propose using application specific virtual machines (ASVMs) to reprogram deployed wireless sensor networks. ASVMs provide a way for a user to define an application-specific boundary between virtual code and the VM engine. This allows programs to be very concise (tens to hundreds of bytes), making program installation fast and inexpensive. Additionally, concise programs interpret few instructions, imposing very little interpretation overhead. We evaluate ASVMs against current proposals for network programming runtimes and show that ASVMs are more energy efficient by as much as 20%. 
We also evaluate ASVMs against hand built TinyOS applications and show that while interpretation imposes a significant execution overhead, the low duty cycles of realistic applications make the actual cost effectively unmeasurable.", "title": "" }, { "docid": "0f8bf207201692ad4905e28a2993ef29", "text": "Bluespec System Verilog is an EDL toolset for ASIC and FPGA design offering significantly higher productivity via a radically different approach to high-level synthesis. Many other attempts at high-level synthesis have tried to move the design language towards a more software-like specification of the behavior of the intended hardware. By means of code samples, demonstrations and measured results, we illustrate how Bluespec System Verilog, in an environment familiar to hardware designers, can significantly improve productivity without compromising generated hardware quality.", "title": "" } ]
scidocsrr
e28c81ce0a1ac636a7c8e3033788a379
Using convolutional features and a sparse autoencoder for land-use scene classification
[ { "docid": "13adeafcb8c1c20e71ca086a0d364e64", "text": "This paper targets learning robust image representation for single training sample per person face recognition. Motivated by the success of deep learning in image representation, we propose a supervised autoencoder, which is a new type of building block for deep architectures. There are two features distinct our supervised autoencoder from standard autoencoder. First, we enforce the faces with variants to be mapped with the canonical face of the person, for example, frontal face with neutral expression and normal illumination; Second, we enforce features corresponding to the same person to be similar. As a result, our supervised autoencoder extracts the features which are robust to variances in illumination, expression, occlusion, and pose, and facilitates the face recognition. We stack such supervised autoencoders to get the deep architecture and use it for extracting features in image representation. Experimental results on the AR, Extended Yale B, CMU-PIE, and Multi-PIE data sets demonstrate that by coupling with the commonly used sparse representation-based classification, our stacked supervised autoencoders-based face representation significantly outperforms the commonly used image representations in single sample per person face recognition, and it achieves higher recognition accuracy compared with other deep learning models, including the deep Lambertian network, in spite of much less training data and without any domain information. Moreover, supervised autoencoder can also be used for face verification, which further demonstrates its effectiveness for face representation.", "title": "" }, { "docid": "f87e8f9d733ed60cedfda1cbfe176cbf", "text": "Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single image based classification, it offers more promises and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state of the art methods.", "title": "" } ]
[ { "docid": "b1f29f32ecc6aa2404cad271427675f2", "text": "RATIONALE\nAnti-N-methyl-D-aspartate (NMDA) receptor encephalitis is an autoimmune disorder that can be controlled and reversed by immunotherapy. The presentation of NMDA receptor encephalitis varies, but NMDA receptor encephalitis is seldom reported in patients with both bilateral teratomas and preexisting brain injury.\n\n\nPATIENT CONCERNS\nA 28-year-old female with a history of traumatic intracranial hemorrhage presented acute psychosis, seizure, involuntary movement, and conscious disturbance with a fulminant course. Anti-NMDA receptor antibody was identified in both serum and cerebrospinal fluid, confirming the diagnosis of anti-NMDA receptor encephalitis. Bilateral teratomas were also identified during tumor survey. DIAGNOSES: anti-N-methyl-D-aspartate receptor encephalitis.\n\n\nINTERVENTIONS\nTumor resection and immunotherapy were performed early during the course.\n\n\nOUTCOMES\nThe patient responded well to tumor resection and immunotherapy. Compared with other reports in the literature, her symptoms rapidly improved without further relapse.\n\n\nLESSONS\nThis case report demonstrates that bilateral teratomas may be related to high antibody titers and that the preexisting head injury may be responsible for lowering the threshold of neurological deficits. Early diagnosis and therapy are crucial for a good prognosis in such patients.", "title": "" }, { "docid": "d049a1779a8660f689f1da5daada69dc", "text": "Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, these malicious perturbations are often unnatural, not semantically meaningful, and not applicable to complicated domains such as language. In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in semantic space of dense and continuous data representation, utilizing the recent advances in generative adversarial networks. We present generated adversaries to demonstrate the potential of the proposed approach for black-box classifiers for a wide range of applications such as image classification, textual entailment, and machine translation. We include experiments to show that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers.", "title": "" }, { "docid": "f3dc6ab7d2d66604353f60fe1d7bd45a", "text": "Establishing end-to-end authentication between devices and applications in Internet of Things (IoT) is a challenging task. Due to heterogeneity in terms of devices, topology, communication and different security protocols used in IoT, existing authentication mechanisms are vulnerable to security threats and can disrupt the progress of IoT in realizing Smart City, Smart Home and Smart Infrastructure, etc. To achieve end-to-end authentication between IoT devices/applications, the existing authentication schemes and security protocols require a two-factor authentication mechanism. Therefore, as part of this paper we review the suitability of an authentication scheme based on One Time Password (OTP) for IoT and proposed a scalable, efficient and robust OTP scheme. 
Our proposed scheme uses the principles of lightweight Identity Based Elliptic Curve Cryptography scheme and Lamport's OTP algorithm. We evaluate analytically and experimentally the performance of our scheme and observe that our scheme with a smaller key size and lesser infrastructure performs on par with the existing OTP schemes without compromising the security level. Our proposed scheme can be implemented in real-time IoT networks and is the right candidate for two-factor authentication among devices, applications and their communications in IoT.", "title": "" }, { "docid": "3cb7e63391444766e946b0fbcef2cb28", "text": "Enterprises Information Systems (EIS) have been applied for decades in Computer-Aided Engineering (CAE) and Computer-Aided Design (CAD), where huge amount of increasing data is stored in the heterogeneous and distributed systems. As systems evaluating, system redesign and reengineering are demanded. A facing challenge is how to interoperate among different systems by overcoming the gap of conceptual heterogeneity. In this article, an enlarged data representation called Semantic Information Layer (SIL) is described for facilitating heterogeneous systems interoperable. SIL plays a role as mediation media and knowledge representation among various systems. The SIL building process is based on ontology engineering, including ontology extraction from relational database (RDB), ontology enrichment and ontology alignment. Mapping path will maintain the links between SIL and data source, and query implementation and user interface are applied to retrieve data and interact with end users. We described fully a practical ontology-driven framework for building SIL and introduced extensively relevant standards and techniques for implementing the framework. In the core part of ontology development, a dynamic multistrategies ontology alignment with automatic matcher selection and dynamic similarity aggregation is proposed. A demonstration case study in the scenario of mobile phone industry is used to illustrate the proposed framework.", "title": "" }, { "docid": "23ee528e0efe7c4fec7f8cda7e49a8dd", "text": "The development of reliability-based design criteria for surface ship structures needs to consider the following three components: (1) loads, (2) structural strength, and (3) methods of reliability analysis. A methodology for reliability-based design of ship structures is provided in this document. The methodology consists of the following two approaches: (1) direct reliabilitybased design, and (2) load and resistance factor design (LRFD) rules. According to this methodology, loads can be linearly or nonlinearly treated. Also in assessing structural strength, linear or nonlinear analysis can be used. The reliability assessment and reliability-based design can be performed at several levels of a structural system, such as at the hull-girder, grillage, panel, plate and detail levels. A rational treatment of uncertainty is suggested by considering all its types. Also, failure definitions can have significant effects on the assessed reliability, or resulting reliability-based designs. A method for defining and classifying failures at the system level is provided. The method considers the continuous nature of redundancy in ship structures. 
A bibliography is provided at the end of this document to facilitate future implementation of the methodology.", "title": "" }, { "docid": "a7089d7b076d2fb974e95985b20d5fa5", "text": "In this paper, we use a simple concept based on k-reverse nearest neighbor digraphs, to develop a framework RECORD for clustering and outlier detection. We developed three algorithms - (i) RECORD algorithm (requires one parameter), (ii) Agglomerative RECORD algorithm (no parameters required) and (iii) Stability-based RECORD algorithm (no parameters required). Our experimental results with published datasets, synthetic and real-life datasets show that RECORD not only handles noisy data, but also identifies the relevant clusters. Our results are as good as (if not better than) the results got from other algorithms.", "title": "" }, { "docid": "1600d4662fc5939c5f737756e2d3e823", "text": "Predicate encryption is a new paradigm for public-key encryption that generalizes identity-based encryption and more. In predicate encryption, secret keys correspond to predicates and ciphertexts are associated with attributes; the secret key SK f corresponding to a predicate f can be used to decrypt a ciphertext associated with attribute I if and only if f(I)=1. Constructions of such schemes are currently known only for certain classes of predicates. We construct a scheme for predicates corresponding to the evaluation of inner products over ℤ N (for some large integer N). This, in turn, enables constructions in which predicates correspond to the evaluation of disjunctions, polynomials, CNF/DNF formulas, thresholds, and more. Besides serving as a significant step forward in the theory of predicate encryption, our results lead to a number of applications that are interesting in their own right.", "title": "" }, { "docid": "6097315ac2e4475e8afd8919d390babf", "text": "This paper presents an origami-inspired technique which allows the application of 2-D fabrication methods to build 3-D robotic systems. The ability to design robots as origami structures introduces a fast and low-cost fabrication method to modern, real-world robotic applications. We employ laser-machined origami patterns to build a new class of robotic systems for mobility and manipulation. Origami robots use only a flat sheet as the base structure for building complicated bodies. An arbitrarily complex folding pattern can be used to yield an array of functionalities, in the form of actuated hinges or active spring elements. For actuation, we use compact NiTi coil actuators placed on the body to move parts of the structure on-demand. We demonstrate, as a proof-of-concept case study, the end-to-end fabrication and assembly of a simple mobile robot that can undergo worm-like peristaltic locomotion.", "title": "" }, { "docid": "0a80057b2c43648e668809e185a68fe6", "text": "A seminar that surveys state-of-the-art microprocessors offers an excellent forum for students to see how computer architecture techniques are employed in practice and for them to gain a detailed knowledge of the state of the art in microprocessor design. Princeton and the University of Virginia have developed such a seminar, organized around student presentations and a substantial research project. The course can accommodate a range of students, from advanced undergraduates to senior graduate students. The course can also be easily adapted to a survey of embedded processors. 
This paper describes the version taught at the University of Virginia and lessons learned from the experience.", "title": "" }, { "docid": "bdbd3d65c79e4f22d2e85ac4137ee67a", "text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.", "title": "" }, { "docid": "0c8d6441b5756d94cd4c3a0376f94fdc", "text": "Electronic word of mouth (eWOM) has been an important factor influencing consumer purchase decisions. Using the ABC model of attitude, this study proposes a model to explain how eWOM affects online discussion forums. Specifically, we propose that platform (Web site reputation and source credibility) and customer (obtaining buying-related information and social orientation through information) factors influence purchase intentions via perceived positive eWOM review credibility, as well as product and Web site attitudes in an online community context. A total of 353 online discussion forum users in an online community (Fashion Guide) in Taiwan were recruited, and structural equation modeling (SEM) was used to test the research hypotheses. The results indicate that Web site reputation, source credibility, obtaining buying-related information, and social orientation through information positively influence perceived positive eWOM review credibility. In turn, perceived positive eWOM review credibility directly influences purchase intentions and also indirectly influences purchase intentions via product and Web site attitudes. Finally, we discuss the theoretical and managerial implications of the findings.", "title": "" }, { "docid": "e5ecbd3728e93badd4cfbf5eef6957f9", "text": "Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. 
These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.", "title": "" }, { "docid": "697ee2a71640226ea71eadf0b9e1d3c8", "text": "Recently, product images have gained increasing attention in clothing recommendation since the visual appearance of clothing products has a significant impact on consumers’ decision. Most existing methods rely on conventional features to represent an image, such as the visual features extracted by convolutional neural networks (CNN features) and the scale-invariant feature transform algorithm (SIFT features), color histograms, and so on. Nevertheless, one important type of features, the aesthetic features, is seldom considered. It plays a vital role in clothing recommendation since a users’ decision depends largely on whether the clothing is in line with her aesthetics, however the conventional image features cannot portray this directly. To bridge this gap, we propose to introduce the aesthetic information, which is highly relevant with user preference, into clothing recommender systems. To achieve this, we first present the aesthetic features extracted by a pre-trained neural network, which is a brain-inspired deep structure trained for the aesthetic assessment task. Considering that the aesthetic preference varies significantly from user to user and by time, we then propose a new tensor factorization model to incorporate the aesthetic features in a personalized manner. We conduct extensive experiments on real-world datasets, which demonstrate that our approach can capture the aesthetic preference of users and significantly outperform several state-of-the-art recommendation methods.", "title": "" }, { "docid": "bffb9dcbc3d7687289fd44527154d81c", "text": "Mobile dating applications such as Coffee Meets Bagel, Tantan, and Tinder, have become significant for young adults to meet new friends and discover romantic relationships. From a system designer’s perspective, in order to achieve better user experience in these applications, we should take both the efficiency and fairness of a dating market into consideration, so as to increase the overall satisfaction for all users. Towards this goal, we investigate the nature of diminishing marginal returns for online dating markets (i.e., captured by the submodularity), and trade-off between the efficiency and fairness of the market with Nash social welfare. We further design effective online algorithms to the apps. We verify our models and algorithms through sound theoretical analysis and empirical studies by using real data and show that our algorithms can significantly improve the ecosystems of the online dating applications. ACM Reference Format: Yongzheng Jia, Xue Liu, and Wei Xu. 2018. When Online Dating Meets Nash Social Welfare: Achieving Efficiency and Fairness. In WWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. 
ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3178876.3186109", "title": "" }, { "docid": "02a276b26400fe37804298601b16bc13", "text": "Over the years, different meanings have been associated with the word consistency in the distributed systems community. While in the ’80s “consistency” typically meant strong consistency, later defined also as linearizability, in recent years, with the advent of highly available and scalable systems, the notion of “consistency” has been at the same time both weakened and blurred.\n In this article, we aim to fill the void in the literature by providing a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades. We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. We further provide a partial order among different consistency predicates, ordering them by their semantic “strength,” which we believe will be useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes.\n The scope of this article is restricted to non-transactional semantics, that is, those that apply to single storage object operations. As such, our article complements the existing surveys done in the context of transactional, database consistency semantics.", "title": "" }, { "docid": "2939f23334ab58a1e1e3fed766ef205b", "text": "We demonstrate a dual-polarization slot antenna for indoor small-cell multiple-input-multiple-output (MIMO) systems. The symmetric structure and differential feeding promotes destructive interference of cross-polarized radiation in the far field to achieve high cross-polarization discrimination (XPD) in all directions. In addition, a very similar radiation pattern is observed not only between the major planes of each polarization but also between the two polarization orientations. Therefore, the proposed antenna can be considered as a stronger candidate for indoor small-cell MIMO systems. With an average XPD of 26.4 dB in all directions, the 3-D ray-tracing simulation results show a more than 22% increase in the system throughput compared to previous dual-polarization antennas for a single-user MIMO system in a typical indoor environment.", "title": "" }, { "docid": "b8470903a91c7d1acafeb813a507daed", "text": "The increasing usage of smart embedded devices in business blurs the line between the virtual and real worlds. This creates new opportunities to build applications that better integrate real-time state of the physical world, and hence, provides enterprise services that are highly dynamic, more diverse, and efficient. Service-Oriented Architecture (SOA) approaches traditionally used to couple functionality of heavyweight corporate IT systems, are becoming applicable to embedded real-world devices, i.e., objects of the physical world that feature embedded processing and communication. In such infrastructures, composed of large numbers of networked, resource-limited devices, the discovery of services and on-demand provisioning of missing functionality is a significant challenge. 
We propose a process and a suitable system architecture that enables developers and business process designers to dynamically query, select, and use running instances of real-world services (i.e., services running on physical devices) or even deploy new ones on-demand, all in the context of composite, real-world business applications.", "title": "" }, { "docid": "7593c8e9eb1520f65d7780efbbcedd7d", "text": "We show how to achieve better illumination estimates for color constancy by combining the results of several existing algorithms. We consider committee methods based on both linear and non-linear ways of combining the illumination estimates from the original set of color constancy algorithms. Committees of grayworld, white patch and neural net methods are tested. The committee results are always more accurate than the estimates of any of the other algorithms taken in isolation.", "title": "" }, { "docid": "66248db37a0dcf8cb17c075108b513b4", "text": "Since past few years there is tremendous advancement in electronic commerce technology, and the use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. In this paper we present the necessary theory to detect fraud in credit card transaction processing using a Hidden Markov Model (HMM). An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected by using an enhancement to it (Hybrid model). In further sections we compare different methods for fraud detection and prove that why HMM is more preferred method than other methods.", "title": "" }, { "docid": "1a5f56c7c7a9d44a762ba94297f3ca7a", "text": "BACKGROUND\nFloods are the most common type of global natural disaster. Floods have a negative impact on mental health. Comprehensive evaluation and review of the literature are lacking.\n\n\nOBJECTIVE\nTo systematically map and review available scientific evidence on mental health impacts of floods caused by extended periods of heavy rain in river catchments.\n\n\nMETHODS\nWe performed a systematic mapping review of published scientific literature in five languages for mixed studies on floods and mental health. PUBMED and Web of Science were searched to identify all relevant articles from 1994 to May 2014 (no restrictions).\n\n\nRESULTS\nThe electronic search strategy identified 1331 potentially relevant papers. Finally, 83 papers met the inclusion criteria. Four broad areas are identified: i) the main mental health disorders-post-traumatic stress disorder, depression and anxiety; ii) the factors associated with mental health among those affected by floods; iii) the narratives associated with flooding, which focuses on the long-term impacts of flooding on mental health as a consequence of the secondary stressors; and iv) the management actions identified. The quantitative and qualitative studies have consistent findings. However, very few studies have used mixed methods to quantify the size of the mental health burden as well as exploration of in-depth narratives. 
Methodological limitations include control of potential confounders and short-term follow up.\n\n\nLIMITATIONS\nFloods following extreme events were excluded from our review.\n\n\nCONCLUSIONS\nAlthough the level of exposure to floods has been systematically associated with mental health problems, the paucity of longitudinal studies and lack of confounding controls precludes strong conclusions.\n\n\nIMPLICATIONS\nWe recommend that future research in this area include mixed-method studies that are purposefully designed, using more rigorous methods. Studies should also focus on vulnerable groups and include analyses of policy and practical responses.", "title": "" } ]
scidocsrr
9a0a277c612abdc977caff35ca0e4909
Automatically discovering local visual material attributes
[ { "docid": "b17fdc300edc22ab855d4c29588731b2", "text": "Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system.", "title": "" } ]
[ { "docid": "d42ed4f231d51cacaf1f42de1c723c31", "text": "A stepped circular waveguide dual-mode (SCWDM) filter is fully investigated in this paper, from its basic characteristic to design formula. As compared to a conventional circular waveguide dual-mode (CWDM) filter, it provides more freedoms for shifting and suppressing the spurious modes in a wide frequency band. This useful attribute can be used for a broadband waveguide contiguous output multiplexer (OMUX) in satellite payloads. The scaling factor for relating coupling value M to its corresponding impedance inverter K in a stepped cavity is derived for full-wave EM design. To validate the design technique, four design examples are presented. One challenging example is a wideband 17-channel Ku-band contiguous multiplexer with two SCWDM channel filters. A triplexer hardware covering the same included bandwidth is also designed and measured. The measurement results show excellent agreement with those of the theoretical EM designs, justifying the effectiveness of full-wave EM modal analysis. Comparing to the best possible design of conventional CWDM filters, at least 30% more spurious-free range in both Ku-band and C-band can be achieved by using SCWDM filters.", "title": "" }, { "docid": "b055b213e4f4b9ddf6822f0fc925d03d", "text": "We study a vehicle routing problem with soft time windows and stochastic travel times. In this problem, we consider stochastic travel times to obtain routes which are both efficient and reliable. In our problem setting, soft time windows allow early and late servicing at customers by incurring some penalty costs. The objective is to minimize the sum of transportation costs and service costs. Transportation costs result from three elements which are the total distance traveled, the number of vehicles used and the total expected overtime of the drivers. Service costs are incurred for early and late arrivals; these correspond to time-window violations at the customers. We apply a column generation procedure to solve this problem. The master problem can be modeled as a classical set partitioning problem. The pricing subproblem, for each vehicle, corresponds to an elementary shortest path problem with resource constraints. To generate an integer solution, we embed our column generation procedure within a branch-and-price method. Computational results obtained by experimenting with well-known problem instances are reported.", "title": "" }, { "docid": "f1e5f8ab0b2ce32553dd5e08f1113b36", "text": "We examined the hypothesis that an excess accumulation of intramuscular lipid (IMCL) is associated with insulin resistance and that this may be mediated by the oxidative capacity of muscle. Nine sedentary lean (L) and 11 obese (O) subjects, 8 obese subjects with type 2 diabetes mellitus (D), and 9 lean, exercise-trained (T) subjects volunteered for this study. Insulin sensitivity (M) determined during a hyperinsulinemic (40 mU x m(-2)min(-1)) euglycemic clamp was greater (P < 0.01) in L and T, compared with O and D (9.45 +/- 0.59 and 10.26 +/- 0.78 vs. 5.51 +/- 0.61 and 1.15 +/- 0.83 mg x min(-1)kg fat free mass(-1), respectively). IMCL in percutaneous vastus lateralis biopsy specimens by quantitative image analysis of Oil Red O staining was approximately 2-fold higher in D than in L (3.04 +/- 0.39 vs. 1.40 +/- 0.28% area as lipid; P < 0.01). IMCL was also higher in T (2.36 +/- 0.37), compared with L (P < 0.01). 
The oxidative capacity of muscle determined with succinate dehydrogenase staining of muscle fibers was higher in T, compared with L, O, and D (50.0 +/- 4.4, 36.1 +/- 4.4, 29.7 +/- 3.8, and 33.4 +/- 4.7 optical density units, respectively; P < 0.01). IMCL was negatively associated with M (r = -0.57, P < 0.05) when endurance-trained subjects were excluded from the analysis, and this association was independent of body mass index. However, the relationship between IMCL and M was not significant when trained individuals were included. There was a positive association between the oxidative capacity and M among nondiabetics (r = 0.37, P < 0.05). In summary, skeletal muscle of trained endurance athletes is markedly insulin sensitive and has a high oxidative capacity, despite having an elevated lipid content. In conclusion, the capacity for lipid oxidation may be an important mediator of the association between excess muscle lipid accumulation and insulin resistance.", "title": "" }, { "docid": "d4f28f36cb55cd2b01a85baeec4ea4a0", "text": "Reconstruction of complex auricular malformations is one of the longest surgical technique to master, because it requires an extremely detailed analysis of the anomaly and of the skin potential, as well as a to learn how to carve a complex 3D structure in costal cartilage. Small anomalies can be taken care of by any plastic surgeon, providing that he/she is aware of all the refinements of ear surgery. In this chapter, we analyze retrospectively 30 years of auricular reconstruction, ranging from small anomalies to microtia (2500 cases), excluding aesthetics variants such as prominent ears.", "title": "" }, { "docid": "041f01bfb8981683bd8bfae4991e098f", "text": "Audio description (AD) has become a cultural revolution for the visually impaired; however, the range of AD beneficiaries can be much broader. We claim that AD is useful for guiding children's attention. The paper presents an eye-tracking study testing the usefulness of AD in selective attention to described elements of a video scene. Forty-four children watched 2 clips from an educational animation series while their eye movements were recorded. Average fixation duration, fixation count, and saccade amplitude served as primary dependent variables. The results confirmed that AD guides children's attention towards described objects resulting e. g., in more fixations on specific regions of interest. We also evaluated eye movement patterns in terms of switching between focal and ambient processing. We postulate that audio description could complement regular teaching tools for guiding and focusing children's attention, especially when new concepts are introduced.", "title": "" }, { "docid": "7ae5b914b00e0791ccfaac122f9b1498", "text": "Frequency deviations of power systems caused by grid-connected wind power fluctuations is one of the key factors which restrains the increase of wind penetration level. This paper examines a combined wind and hybrid energy storage system (HESS, supercapacitor, and battery) to smooth wind power fluctuations. A fuzzy-based wind-HESS system (FWHS) controller is proposed to suppress the wind power fluctuations. The proposed controller takes full advantage of the complimentary characteristics of the supercapacitor and battery with the supercapacitor and battery in charge of high and middle frequency components of wind fluctuations, respectively. 
A differential evolution (DE)-based optimal sizing method for HESS systems is introduced to evaluate the minimum capacity of HESS as being limited by grid frequency deviation. The efficiency of the proposed scheme in the paper for wind-HESS system is evaluated by a real Chinese power system.", "title": "" }, { "docid": "3c4e1c7fd5dbdf5ea50eeed1afe23ff9", "text": "Power management is an important concern in sensor networks, because a tethered energy infrastructure is usually not available and an obvious concern is to use the available battery energy efficiently. However, in some of the sensor networking applications, an additional facility is available to ameliorate the energy problem: harvesting energy from the environment. Certain considerations in using an energy harvesting source are fundamentally different from that in using a battery, because, rather than a limit on the maximum energy, it has a limit on the maximum rate at which the energy can be used. Further, the harvested energy availability typically varies with time in a nondeterministic manner. While a deterministic metric, such as residual battery, suffices to characterize the energy availability in the case of batteries, a more sophisticated characterization may be required for a harvesting source. Another issue that becomes important in networked systems with multiple harvesting nodes is that different nodes may have different harvesting opportunity. In a distributed application, the same end-user performance may be achieved using different workload allocations, and resultant energy consumptions at multiple nodes. In this case, it is important to align the workload allocation with the energy availability at the harvesting nodes. We consider the above issues in power management for energy-harvesting sensor networks. We develop abstractions to characterize the complex time varying nature of such sources with analytically tractable models and use them to address key design issues. We also develop distributed methods to efficiently use harvested energy and test these both in simulation and experimentally on an energy-harvesting sensor network, prototyped for this work.", "title": "" }, { "docid": "5adb5e056b099c5ec2f8e91006d96615", "text": "BACKGROUND\nEmbodied conversational agents (ECAs) are computer-generated characters that simulate key properties of human face-to-face conversation, such as verbal and nonverbal behavior. In Internet-based eHealth interventions, ECAs may be used for the delivery of automated human support factors.\n\n\nOBJECTIVE\nWe aim to provide an overview of the technological and clinical possibilities, as well as the evidence base for ECA applications in clinical psychology, to inform health professionals about the activity in this field of research.\n\n\nMETHODS\nGiven the large variety of applied methodologies, types of applications, and scientific disciplines involved in ECA research, we conducted a systematic scoping review. Scoping reviews aim to map key concepts and types of evidence underlying an area of research, and answer less-specific questions than traditional systematic reviews. Systematic searches for ECA applications in the treatment of mood, anxiety, psychotic, autism spectrum, and substance use disorders were conducted in databases in the fields of psychology and computer science, as well as in interdisciplinary databases. Studies were included if they conveyed primary research findings on an ECA application that targeted one of the disorders. 
We mapped each study's background information, how the different disorders were addressed, how ECAs and users could interact with one another, methodological aspects, and the study's aims and outcomes.\n\n\nRESULTS\nThis study included N=54 publications (N=49 studies). More than half of the studies (n=26) focused on autism treatment, and ECAs were used most often for social skills training (n=23). Applications ranged from simple reinforcement of social behaviors through emotional expressions to sophisticated multimodal conversational systems. Most applications (n=43) were still in the development and piloting phase, that is, not yet ready for routine practice evaluation or application. Few studies conducted controlled research into clinical effects of ECAs, such as a reduction in symptom severity.\n\n\nCONCLUSIONS\nECAs for mental disorders are emerging. State-of-the-art techniques, involving, for example, communication through natural language or nonverbal behavior, are increasingly being considered and adopted for psychotherapeutic interventions in ECA research with promising results. However, evidence on their clinical application remains scarce. At present, their value to clinical practice lies mostly in the experimental determination of critical human support factors. In the context of using ECAs as an adjunct to existing interventions with the aim of supporting users, important questions remain with regard to the personalization of ECAs' interaction with users, and the optimal timing and manner of providing support. To increase the evidence base with regard to Internet interventions, we propose an additional focus on low-tech ECA solutions that can be rapidly developed, tested, and applied in routine practice.", "title": "" }, { "docid": "4b0cf6392d84a0cc8ab80c6ed4796853", "text": "This paper introduces the Finite-State TurnTaking Machine (FSTTM), a new model to control the turn-taking behavior of conversational agents. Based on a non-deterministic finite-state machine, the FSTTM uses a cost matrix and decision theoretic principles to select a turn-taking action at any time. We show how the model can be applied to the problem of end-of-turn detection. Evaluation results on a deployed spoken dialog system show that the FSTTM provides significantly higher responsiveness than previous approaches.", "title": "" }, { "docid": "0ffe59ea5705ae6d180cee8976bbffb4", "text": "We propose an analytical framework for studying parallel repetition, a basic product operation for one-round twoplayer games. In this framework, we consider a relaxation of the value of projection games. We show that this relaxation is multiplicative with respect to parallel repetition and that it provides a good approximation to the game value. Based on this relaxation, we prove the following improved parallel repetition bound: For every projection game G with value at most ρ, the k-fold parallel repetition G⊗k has value at most\n [EQUATION]\n This statement implies a parallel repetition bound for projection games with low value ρ. Previously, it was not known whether parallel repetition decreases the value of such games. 
This result allows us to show that approximating set cover to within factor (1 --- ε) ln n is NP-hard for every ε > 0, strengthening Feige's quasi-NP-hardness and also building on previous work by Moshkovitz and Raz.\n In this framework, we also show improved bounds for few parallel repetitions of projection games, showing that Raz's counterexample to strong parallel repetition is tight even for a small number of repetitions.\n Finally, we also give a short proof for the NP-hardness of label cover(1, Δ) for all Δ > 0, starting from the basic PCP theorem.", "title": "" }, { "docid": "4d04debb13948f73e959929dbf82e139", "text": "DynaMIT is a simulation-based real-time system designed to estimate the current state of a transportation network, predict future tra c conditions, and provide consistent and unbiased information to travelers. To perform these tasks, e cient simulators have been designed to explicitly capture the interactions between transportation demand and supply. The demand re ects both the OD ow patterns and the combination of all the individual decisions of travelers while the supply re ects the transportation network in terms of infrastructure, tra c ow and tra c control. This paper describes the design and speci cation of these simulators, and discusses their interactions. Massachusetts Institute of Technology, Dpt of Civil and Environmental Engineering, Cambridge, Ma. Email: mba@mit.edu Ecole Polytechnique F ed erale de Lausanne, Dpt. of Mathematics, CH-1015 Lausanne, Switzerland. Email: michel.bierlaire@ep .ch Volpe National Transportation Systems Center, Dpt of Transportation, Cambridge, Ma. Email: koutsopoulos@volpe.dot.gov The Ohio State University, Columbus, Oh. Email: mishalani.1@osu.edu", "title": "" }, { "docid": "a7e55ef23e9ea613da6a4664108ce4ce", "text": "Representing data in lower dimensional spaces has been used extensively in many disciplines such as natural language and image processing, data mining, and information retrieval. Recommender systems deal with challenging issues such as scalability, noise, and sparsity and thus, matrix and tensor factorization techniques appear as an interesting tool to be exploited. That is, we can deal with all aforementioned challenges by applying matrix and tensor decomposition methods (also known as factorization methods). In this chapter, we provide some basic definitions and preliminary concepts on dimensionality reduction methods of matrices and tensors. Gradient descent and alternating least squares methods are also discussed. Finally, we present the book outline and the goals of each chapter.", "title": "" }, { "docid": "d8a7ab2abff4c2e5bad845a334420fe6", "text": "Tone-mapping operators (TMOs) are designed to generate perceptually similar low-dynamic-range images from high-dynamic-range ones. We studied the performance of 15 TMOs in two psychophysical experiments where observers compared the digitally generated tone-mapped images to their corresponding physical scenes. All experiments were performed in a controlled environment, and the setups were designed to emphasize different image properties: in the first experiment we evaluated the local relationships among intensity levels, and in the second one we evaluated global visual appearance among physical scenes and tone-mapped images, which were presented side by side. We ranked the TMOs according to how well they reproduced the results obtained in the physical scene. 
Our results show that ranking position clearly depends on the adopted evaluation criteria, which implies that, in general, these tone-mapping algorithms consider either local or global image attributes but rarely both. Regarding the question of which TMO is the best, KimKautz [\"Consistent tone reproduction,\" in Proceedings of Computer Graphics and Imaging (2008)] and Krawczyk [\"Lightness perception in tone reproduction for high dynamic range images,\" in Proceedings of Eurographics (2005), p. 3] obtained the better results across the different experiments. We conclude that more thorough and standardized evaluation criteria are needed to study all the characteristics of TMOs, as there is ample room for improvement in future developments.", "title": "" }, { "docid": "c8dc06de68e4706525e98f444e9877e4", "text": "This study used two field trials with 5 and 34 years of liming histories, respectively, and aimed to elucidate the long-term effect of liming on soil organic C (SOC) in acid soils. It was hypothesized that long-term liming would increase SOC concentration, macro-aggregate stability and SOC concentration within aggregates. Surface soils (0–10 cm) were sampled and separated into four aggregate-size classes: large macro-aggregates (>2 mm), small macro-aggregates (0.25–2 mm), micro-aggregates (0.053–0.25 mm) and silt and clay fraction (<0.053 mm) by wet sieving, and the SOC concentration of each aggregate-size was quantified. Liming decreased SOC in the bulk soil and in aggregates as well as macro-aggregate stability in the low-input and cultivated 34-year-old trial. In contrast, liming did not significantly change the concentration of SOC in the bulk soil or in aggregates but improved macro-aggregate stability in the 5-year-old trial under undisturbed unimproved pastures. Furthermore, the single application of lime to the surface soil increased pH in both topsoil (0–10 cm) and subsurface soil (10–20 cm) and increased K2SO4-extractable C, microbial biomass C (Cmic) and basal respiration (CO2) in both soil layers of both lime trials. Liming increased the percentage of SOC present as microbial biomass C (Cmic/Corg) and decreased the respiration rate per unit biomass (qCO2). The study concludes that despite long-term liming decreased total SOC in the low-input systems, it increased labile C pools and the percentage of SOC present as microbial biomass C.", "title": "" }, { "docid": "30f480c126100a5ed425e7254534f77d", "text": "Using a wristband-type Photoplethymography (PPG) sensor, useful biomedical information such as heart rate and oxygen saturation can be acquired. Most of commercially-used wrist-type PPG sensors use green light reflections for its greater absorptivity of hemoglobin compared to other lights; this is important because wrists have comparably low concentration of blood flow. For reliable biomedical signal processing, we propose measurement sites for reflected red, green, infrared light PPG sensors on wrist. Amplitude, detection rate, and accuracy of heart rate are compared to determine the signal quality on measurement sites. Traditionally, wrist-type PPG sensors are implemented in measurement site 2, 3 or between 2 and 3 (between the distal Radius and the head of Ulna). 
Experiments show that all three reflected light PPG sensors generate good quality of PPG signals on measurement sites 4 and 11 (around the distal of Radius of left hand) in test subjects.", "title": "" }, { "docid": "bbf987eef74d76cf2916ae3080a2b174", "text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.", "title": "" }, { "docid": "5d7f5a6981824a257fe3868375f1d18f", "text": "This paper describes a mobile robotic assistant, developed to assist elderly individuals with mild cognitive and physical impairments, as well as support nurses in their daily activities. We present three software modules relevant to ensure successful human–robot interaction: an automated reminder system; a people tracking and detection system; and finally a high-level robot controller that performs planning under uncertainty by incorporating knowledge from low-level modules, and selecting appropriate courses of actions. During the course of experiments conducted in an assisted living facility, the robot successfully demonstrated that it could autonomously provide reminders and guidance for elderly residents. a a Purchase Export", "title": "" }, { "docid": "073ec1e3b8c6feab18f2ae53eab5cc24", "text": "Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) which is used as a module for learning deep belief nets one layer at a time. The Gaussian-Binary RBMs that have been used to model real-valued data are not a good way to model the covariance structure of natural images. We propose a factored 3-way RBM that uses the states of its hidden units to represent abnormalities in the local covariance structure of an image. This provides a probabilistic framework for the widely used simple/complex cell architecture. Our model learns binary features that work very well for object recognition on the “tiny images” data set. Even better features are obtained by then using standard binary RBM’s to learn a deeper model.", "title": "" }, { "docid": "f90a4bbfbe4c6ea98457639a65dd84af", "text": "People in different cultures have strikingly different construals of the self, of others, and of the interdependence of the 2. These construals can influence, and in many cases determine, the very nature of individual experience, including cognition, emotion, and motivation. Many Asian cultures have distinct conceptions of individuality that insist on the fundamental relatedness of individuals to each other. The emphasis is on attending to others, fitting in, and harmonious interdependence with them. 
American culture neither assumes nor values such an overt connectedness among individuals. In contrast, individuals seek to maintain their independence from others by attending to the self and by discovering and expressing their unique inner attributes. As proposed herein, these construals are even more powerful than previously imagined. Theories of the self from both psychology and anthropology are integrated to define in detail the difference between a construal of the self as independent and a construal of the self as interdependent. Each of these divergent construals should have a set of specific consequences for cognition, emotion, and motivation; these consequences are proposed and relevant empirical literature is reviewed. Focusing on differences in self-construals enables apparently inconsistent empirical findings to be reconciled, and raises questions about what have been thought to be culture-free aspects of cognition, emotion, and motivation.", "title": "" }, { "docid": "b5238bfae025d46647526229dd5e00dd", "text": "Influences of discharge voltage on wheat seed vitality were investigated in a dielectric barrier discharge (DBD) plasma system at atmospheric pressure and temperature. Six different treatments were designed, and their discharge voltages were 0.0, 9.0, 11.0, 13.0, 15.0, and 17.0 kV, respectively. Fifty seeds were exposed to the DBD plasma atmosphere with an air flow rate of 1.5 L min-1 for 4 min in each treatment, and then the DBD plasma-treated seeds were prepared for germination in several Petri dishes. Each treatment was repeated three times. Germination indexes, growth indexes, surface topography, water uptake, permeability, and α-amylase activity were measured. DBD plasma treatment at appropriate energy levels had positive effects on wheat seed germination and seedling growth. The germination potential, germination index, and vigor index significantly increased by 31.4%, 13.9%, and 54.6% after DBD treatment at 11.0 kV, respectively, in comparison to the control. Shoot length, root length, dry weight, and fresh weight also significantly increased after the DBD plasma treatment. The seed coat was softened and cracks were observed, systematization of the protein was strengthened, and amount of free starch grain increased after the DBD plasma treatment. Water uptake, relative electroconductivity, soluble protein, and α-amylase activity of the wheat seed were also significantly improved after the DBD plasma treatment. Roles of active species and ultraviolet radiation generated in the DBD plasma process in wheat seed germination and seedling growth are proposed. Bioelectromagnetics. 39:120-131, 2018. © 2017 Wiley Periodicals, Inc.", "title": "" } ]
scidocsrr
e41b8d88ebfa6e65f400dcccb15555dc
Labia Majora Augmentation with De-epithelialized Labial Rim (Minora) Flaps as an Auxiliary Procedure for Labia Minora Reduction
[ { "docid": "da93678f1b1070d68cfcbc9b7f6f88fe", "text": "Dermal fat grafts have been utilized in plastic surgery for both reconstructive and aesthetic purposes of the face, breast, and body. There are multiple reports in the literature on the male phallus augmentation with the use of dermal fat grafts. Few reports describe female genitalia aesthetic surgery, in particular rejuvenation of the labia majora. In this report we describe an indication and use of autologous dermal fat graft for labia majora augmentation in a patient with loss of tone and volume in the labia majora. We found that this procedure is an option for labia majora augmentation and provides a stable result in volume-restoration.", "title": "" }, { "docid": "20cc5c4aa870918f123e78490d5a5a73", "text": "The interest and demand for female genital rejuvenation surgery are steadily increasing. This report presents a concept of genital beautification consisting of labia minora reduction, labia majora augmentation by autologous fat transplantation, labial brightening by laser, mons pubis reduction by liposuction, and vaginal tightening if desired. Genital beautification was performed for 124 patients between May 2009 and January 2012 and followed up for 1 year to obtain data about satisfaction with the surgery. Of the 124 female patients included in the study, 118 (95.2 %) were happy and 4 (3.2 %) were very happy with their postoperative appearance. In terms of postoperative functionality, 84 patients (67.7 %) were happy and 40 (32.3 %) were very happy. Only 2 patients (1.6 %) were not satisfied with the aesthetic result of their genital beautification procedures, and 10 patients (8.1 %) experienced wound dehiscence. The described technique of genital beautification combines different aesthetic female genital surgery techniques. Like other aesthetic surgeries, these procedures are designed for the subjective improvement of the appearance and feelings of the patients. The effects of the operation are functional and psychological. They offer the opportunity for sexual stimulation and satisfaction. The complication rate is low. Superior aesthetic results and patient satisfaction can be achieved by applying this technique. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" } ]
[ { "docid": "349f53ceb63e415d2fb3e97410c0ef88", "text": "The current prominence and future promises of the Internet of Things (IoT), Internet of Everything (IoE) and Internet of Nano Things (IoNT) are extensively reviewed and a summary survey report is presented. The analysis clearly distinguishes between IoT and IoE which are wrongly considered to be the same by many people. Upon examining the current advancement in the fields of IoT, IoE and IoNT, the paper presents scenarios for the possible future expansion of their applications.", "title": "" }, { "docid": "eed45b473ebaad0740b793bda8345ef3", "text": "Plyometric training (PT) enhances soccer performance, particularly vertical jump. However, the effectiveness of PT depends on various factors. A systematic search of the research literature was conducted for randomized controlled trials (RCTs) studying the effects of PT on countermovement jump (CMJ) height in soccer players. Ten studies were obtained through manual and electronic journal searches (up to April 2017). Significant differences were observed when compared: (1) PT group vs. control group (ES=0.85; 95% CI 0.47-1.23; I2=68.71%; p<0.001), (2) male vs. female soccer players (Q=4.52; p=0.033), (3) amateur vs. high-level players (Q=6.56; p=0.010), (4) single session volume (<120 jumps vs. ≥120 jumps; Q=6.12, p=0.013), (5) rest between repetitions (5 s vs. 10 s vs. 15 s vs. 30 s; Q=19.10, p<0.001), (6) rest between sets (30 s vs. 60 s vs. 90 s vs. 120 s vs. 240 s; Q=19.83, p=0.001) and (7) and overall training volume (low: <1600 jumps vs. high: ≥1600 jumps; Q=5.08, p=0.024). PT is an effective form of training to improve vertical jump performance (i.e., CMJ) in soccer players. The benefits of PT on CMJ performance are greater for interventions of longer rest interval between repetitions (30 s) and sets (240 s) with higher volume of more than 120 jumps per session and 1600 jumps in total. Gender and competitive level differences should be considered when planning PT programs in soccer players.", "title": "" }, { "docid": "ad95d056922926bc69542c9bc36ac83a", "text": "E-mail has become the most popular Internet application and with its rise in use has come an inevitable increase in the use of e-mail for criminal purposes. It is possible for an e-mail message to be sent anonymously or through spoof ed servers. Computer forensics analysts need a tool that can be used to identify th e author of such e-mail messages. This thesis describes the development of such a tool using te chniques from the fields of stylometry and machine learning. An author’s style can be reduced to a pattern by making measurements of various stylometric feat ur s from the text. E-mail messages also contain macro-structural features that can b e measured. These features together can be used with the Support Vector Machine learnin g algorithm to classify or attribute authorship of e-mail messages to an author prov iding a suitable sample of messages is available for comparison. In an investigation, the set of authors may need to be reduced from an initial large list of possible suspects. 
This research has trialled autho rship characterisation based on sociolinguistic cohorts, such as gender and language bac kground, as a technique for profiling the anonymous message so that the suspect list can b e reduced.", "title": "" }, { "docid": "6b72358b5cbbe349ee09f88773762ab1", "text": "Estimating virtual CT(vCT) image from MRI data is in crucial need for medical application due to the relatively high dose of radiation exposure in CT scan and redundant workflow of both MR and CT. Among the existing work, the fully convolutional neural network(FCN) shows its superiority in generating vCT of high fidelity which merits further investigation. However, the most widely used evaluation metrics mean absolute error (MAE) and peak signal to noise ratio (PSNR) may not be adequate enough to reflect the structure quality of the vCT, while most of the current FCN based approaches focus more on the architectures but have little attention on the loss functions which are closely related to the final evaluation. The objective of this thesis is to apply Structure Similarity(SSIM) as loss function for predicting vCT from MRI based on FCN and see whether the prediction has improvement in terms of structure compared with conventionally used l or l loss. Inspired by the SSIM, the contextual l has been proposed to investigate the impact of introducing context information to the loss function. CT data was non-rigidly registered to MRI for training and evaluation. Patch-based 3D FCN were optimized for different loss functions to predict vCT from MRI data. Specifically for optimizing SSIM, the training data should be normalization to [0, 1] and architecture should be slightly changed by adding the ReLu layer before the output to guarantee the convexity of the SSIM during the training. The evaluation results are carried out with 7-folds cross validation of the 14 patients. MAE, PSNR and SSIM for the whole volume and tissue-wise are evaluted respectively. All optimizations successfully converged well and cl outperformed the other losses in terms of PSNR and MAE but with the worst SSIM, DSSIM works better at preserving the structures and resulting in smooth output. Yuan Zhou Delft, August 2017", "title": "" }, { "docid": "f400b94dd5f4d4210bd6873b44697e3a", "text": "A system for monitoring and forecasting urban air pollution is presented in this paper. The system uses low-cost air-quality monitoring motes that are equipped with an array of gaseous and meteorological sensors. These motes wirelessly communicate to an intelligent sensing platform that consists of several modules. The modules are responsible for receiving and storing the data, preprocessing and converting the data into useful information, forecasting the pollutants based on historical information, and finally presenting the acquired information through different channels, such as mobile application, Web portal, and short message service. The focus of this paper is on the monitoring system and its forecasting module. Three machine learning (ML) algorithms are investigated to build accurate forecasting models for one-step and multi-step ahead of concentrations of ground-level ozone (O3), nitrogen dioxide (NO2), and sulfur dioxide (SO2). These ML algorithms are support vector machines, M5P model trees, and artificial neural networks (ANN). Two types of modeling are pursued: 1) univariate and 2) multivariate. The performance evaluation measures used are prediction trend accuracy and root mean square error (RMSE). 
The results show that using different features in multivariate modeling with the M5P algorithm yields the best forecasting performances. For example, using M5P, RMSE is at its lowest, reaching 31.4, when hydrogen sulfide (H2S) is used to predict SO2. Contrarily, the worst performance, i.e., RMSE of 62.4, for SO2 is obtained when using ANN in univariate modeling. The outcome of this paper can be significantly useful for alarming applications in areas with high air pollution levels.", "title": "" }, { "docid": "59a1088003576f2e75cdbedc24ae8bdf", "text": "Two literatures or sets of articles are complementary if, considered together, they can reveal useful information of scientific interest not apparent in either of the two sets alone. Of particular interest are complementary literatures that are also mutually isolated and noninteractive (they do not cite each other and are not co-cited). In that case, the intriguing possibility arises that the information gained by combining them is novel. During the past decade, we have identified seven examples of complementary noninteractive structures in the biomedical literature. Each structure led to a novel, plausible, and testable hypothesis that, in several cases, was subsequently corroborated by medical researchers through clinical or laboratory investigation. We have also developed, tested, and described a systematic, computer-aided approach to finding and identifying complementary noninteractive literatures. Specialization, Fragmentation, and a Connection Explosion By some obscure spontaneous process scientists have responded to the growth of science by organizing their work into specialties, thus permitting each individual to focus on a small part of the total literature. Specialties that grow too large tend to divide into subspecialties that have their own literatures which, by a process of repeated splitting, maintain more or less fixed and manageable size. As the total literature grows, the number of specialties, but not in general the size of each, increases (Kochen, 1963; Swanson, 1990c). But the unintended consequence of specialization is fragmentation. By dividing up the pie, the potential relationships among its pieces tend to be neglected. Although scientific literature cannot, in the long run, grow disproportionately to the growth of the communities and resources that produce it, combinations of implicitly related segments of literature can grow much faster than the literature itself and can readily exceed the capacity of the community to identify and assimilate such relatedness (Swanson, 1993). The significance of the \"information explosion\" thus may lie not in an explosion of quantity per se, but in an incalculably greater combinatorial explosion of unnoticed and unintended logical connections. The Significance of Complementary Noninteractive Literatures If two literatures each of substantial size are linked by arguments that they respectively put forward - that is, are \"logically\" related, or complementary - one would expect to gain useful information by combining them. For example, suppose that one (biomedical) literature establishes that some environmental factor A influences certain internal physiological conditions and a second literature establishes that these same physiological changes influence the course of disease C. Presumably, then, anyone who reads both literatures could conclude that factor A might influence disease C. Under such conditions of complementarity one would also expect the two literatures to refer to each other.
If, however, the two literatures were developed independently of one another, the logical linkage illustrated may be both unintended and unnoticed. To detect such mutual isolation, we examine the citation pattern. If two literatures are \"noninteractive\" - that is, if they have never (or seldom) been cited together, and if neither cites the other - then it is possible that scientists have not previously considered both literatures together, and so it is possible that no one is aware of the implicit A-C connection. The two conditions, complementarity and noninteraction, describe a model structure that shows how useful information can remain undiscovered even though its components consist of public knowledge (Swanson, 1987, 1991). Public Knowledge / Private Knowledge There is, of course, no way to know in any particular case whether the possibility of an AC relationship in the above model has or has not occurred to someone, or whether or not anyone has actually considered the two literatures on A and C together, a private matter that necessarily remains conjectural. However, our argument is based only on determining whether there is any printed evidence to the contrary. We are concerned with public rather than private knowledge - with the state of the record produced rather than the state of mind of the producers (Swanson, 1990d). The point of bringing together the AB and BC literatures, in any event, is not to \"prove\" an AC linkage (by considering only transitive relationships), but rather to call attention to an apparently unnoticed association that may be worth investigating. In principle any chain of scientific, including analogic, reasoning in which different links appear in noninteractive literatures may lead to the discovery of new interesting connections. \"What people know\" is a common understanding of what is meant by \"knowledge\". If taken in this subjective sense, the idea of \"knowledge discovery\" could mean merely that someone discovered something they hadn't known before. Our focus in the present paper is on a second sense of the word \"knowledge\", a meaning associated with the products of human intellectual activity, as encoded in the public record, rather than with the contents of the human mind. This abstract world of human-created \"objective\" knowledge is open to exploration and discovery, for it can contain territory that is subjectively unknown to anyone (Popper, 1972). Our work is directed toward the discovery of scientifically useful information implicit in the public record, but not previously made explicit. The problem we address concerns structures within the scientific literature, not within the mind. The Process of Finding Complementary Noninteractive Literatures During the past ten years, we have pursued three goals: i) to show in principle how new knowledge might be gained by synthesizing logically related noninteractive literatures; ii) to demonstrate that such structures do exist, at least within the biomedical literature; and iii) to develop a systematic process for finding them. In pursuit of goal iii, we have created interactive software and database search strategies that can facilitate the discovery of complementary structures in the published literature of science.
The universe or search space under consideration is limited only by the coverage of the major scientific databases, though we have focused primarily on the biomedical field and the MEDLINE database (8 million records). In 1991, a systematic approach to finding complementary structures was outlined and became a point of departure for software development (Swanson, 1991). The system that has now taken shape is based on a 3-way interaction between computer software, bibliographic databases, and a human operator. The interaction generates information structures that are used heuristically to guide the search for promising complementary literatures. The user of the system begins by choosing a question of scientific interest that can be associated with a literature, C. Elsewhere we describe and evaluate experimental computer software, which we call ARROWSMITH (Swanson & Smalheiser, 1997), that performs two separate functions that can be used independently. The first function produces a list of candidates for a second literature, A, complementary to C, from which the user can select one candidate (at a time) as input, along with C, to the second function. This first function can be considered as a computer-assisted process of problem-discovery, an issue identified in the AI literature (Langley, et al., 1987; p304-307). Alternatively, the user may wish to identify a second literature, A, as a conjecture or hypothesis generated independently of the computer-produced list of candidates. Our approach has been based on the use of article titles as a guide to identifying complementary literatures. As indicated above, our point of departure for the second function is a tentative scientific hypothesis associated with two literatures, A and C. A title-word search of MEDLINE is used to create two local computer title-files associated with A and C, respectively. These files are used as input to the ARROWSMITH software, which then produces a list of all words common to the two sets of titles, except for words excluded by an extensive stoplist (presently about 5000 words). The resulting list of words provides the basis for identifying title-word pathways that might provide clues to the presence of complementary arguments within the literatures corresponding to A and C. The output of this procedure is a structured title display (plus journal citation) that serves as a heuristic aid to identifying word-linked titles and serves also as an organized guide to the literature.", "title": "" }, { "docid": "b1c0fb9a020d8bc85b23f696586dd9d3", "text": "Most instances of real-life language use involve discourses in which several sentences or utterances are coherently linked through the use of repeated references. Repeated reference can take many forms, and the choice of referential form has been the focus of much research in several related fields. In this article we distinguish between three main approaches: one that addresses the 'why' question – why are certain forms used in certain contexts; one that addresses the 'how' question – how are different forms processed; and one that aims to answer both questions by seriously considering both the discourse function of referential expressions, and the cognitive mechanisms that underlie their processing cost. We argue that only the latter approach is capable of providing a complete view of referential processing, and that in so doing it may also answer a more profound 'why' question – why does language offer multiple referential forms.
Coherent discourse typically involves repeated references to previously mentioned referents, and these references can be made with different forms. For example, a person mentioned in discourse can be referred to by a proper name (e.g., Bill), a definite description (e.g., the waiter), or a pronoun (e.g., he). When repeated reference is made to a referent that was mentioned in the same sentence, the choice and processing of referential form may be governed by syntactic constraints such as binding principles (Chomsky 1981). However, in many cases of repeated reference to a referent that was mentioned in the same sentence, and in all cases of repeated reference across sentences, the choice and processing of referential form reflects regular patterns and preferences rather than strong syntactic constraints. The present article focuses on the factors that underlie these patterns. Considerable research in several disciplines has aimed to explain how speakers and writers choose which form they should use to refer to objects and events in discourse, and how listeners and readers process different referential forms (e.g., Chafe 1976; Clark & Wilkes 1986; Kintsch 1988; Gernsbacher 1989; Ariel 1990; Gordon, Grosz & Gilliom 1993; Gundel, Hedberg & Zacharski 1993; Garrod & Sanford 1994; Gordon & Hendrick 1998; Almor 1999; Cowles & Garnham 2005). One of the central observations in this research is that there exists an inverse relation between the specificity of the referential", "title": "" }, { "docid": "6559d77de48d153153ce77b0e2969793", "text": "1 This paper is an invited chapter to be published in the Handbooks in Operations Research and Management Science: Supply Chain Management, edited by Steve Graves and Ton de Kok and published by North-Holland. I would like to thank the many people that carefully read and commented on the ...rst draft of this manuscript: Ravi Anupindi, Fangruo Chen, Charles Corbett, James Dana, Ananth Iyer, Ton de Kok, Yigal Gerchak, Mark Ferguson, Marty Lariviere, Serguei Netessine, Ediel Pinker, Nils Rudi, Sridhar Seshadri, Terry Taylor and Kevin Weng. I am, of course, responsible for all remaining errors. Comments, of course, are still quite welcomed.", "title": "" }, { "docid": "463c6bb86f81d0f0e19427772add1a22", "text": "Administrative burden represents the costs to businesses, citizens and the administration itself of complying with government regulations and procedures. The burden tends to increase with new forms of public governance that rely less on direct decisions and actions undertaken by traditional government bureaucracies, and more on government creating and regulating the environment for other, non-state actors to jointly address public needs. Based on the reviews of research and policy literature, this paper explores administrative burden as a policy problem, presents how Digital Government (DG) could be applied to address this problem, and identifies societal adoption, organizational readiness and other conditions under which DG can be an effective tool for Administrative Burden Reduction (ABR). Finally, the paper tracks ABR to the latest Contextualization stage in the DG evolution, and discusses possible development approaches and technological potential of pursuing ABR through DG.", "title": "" }, { "docid": "24a0f441ff09e7a60a1e22e2ca3f1194", "text": "As an important information portal, online healthcare forum are playing an increasingly crucial role in disseminating information and offering support to people. 
It connects people with the leading medical experts and others who have similar experiences. During an epidemic outbreak, such as H1N1, it is critical for the health department to understand how the public is responding to the ongoing pandemic, which has a great impact on the social stability. In this case, identifying influential users in the online healthcare forum and tracking the information spreading in such online community can be an effective way to understand the public reaction toward the disease. In this paper, we propose a framework to monitor and identify influential users from online healthcare forum. We first develop a mechanism to identify and construct social networks from the discussion board of an online healthcare forum. We propose the UserRank algorithm which combines link analysis and content analysis techniques to identify influential users. We have also conducted an experiment to evaluate our approach on the Swine Flu forum which is a sub-community of a popular online healthcare community, MedHelp (www.medhelp.org). Experimental results show that our technique outperforms PageRank, in-degree and out-degree centrality in identifying influential users from an online healthcare forum.", "title": "" }, { "docid": "e03b0d954f52e6880a7b37bc346ae471", "text": "1 Introduction Many induction algorithms construct models with unnecessary structure. These models contain components that do not improve accuracy, and that only reflect random variation in a single data sample. Such models are less efficient to store and use than their correctly-sized counterparts. Using these models requires the collection of unnecessary data. Portions of these models are wrong and mislead users. Finally, excess structure can reduce the accuracy of induced models on new data [8]. For induction algorithms that build decision trees [1, 7, 10], pruning is a common approach to remove excess structure. Pruning methods take an induced tree, examine individual subtrees, and remove those subtrees deemed unnecessary. Pruning methods differ primarily in the criterion used to judge subtrees. Many criteria have been proposed, including statistical significance tests [10], corrected error estimates [7], and minimum description length calculations [9]. In this paper, we bring together three threads of our research on excess structure and decision tree pruning. First, we show that several common methods for pruning decision trees still retain excess structure. Second, we explain this phenomenon in terms of statistical decision making with incorrect reference distributions. Third, we present a method that adjusts for incorrect reference distributions, and we present an experiment that evaluates the method. Our analysis indicates that many existing techniques for building decision trees fail to consider the statistical implications of examining many possible subtrees. We show how a simple adjustment can allow such systems to make valid statistical inferences in this specific situation. 2 Observing Excess Structure Consider Figure 1, which shows a typical plot of tree size and accuracy as a function of training set size for the UCI australian dataset.[1]
Moving from left-to-right in the graph corresponds to increasing the number of training instances available to the tree building process. On the left-hand side, no training instances are available and the best one can do with test instances is to assign them a class label at random. On the right-hand side, the entire dataset (excluding test instances) is available to the tree building process. C4.5 [7] and error-based pruning (the C4.5 default) are used to build and prune trees, respectively. Note that accuracy on this dataset stops increasing at a rather small training set size, thereafter remaining essentially constant.[2] Surprisingly, tree size continues to grow nearly linearly despite the use of error-based pruning. The graph clearly shows that unnecessary structure is retained, and more is retained as the size of the training set increases. Accuracy stops increasing after only 25% of the available training instances are seen. The tree at that point contains 22 nodes. When 100% of the available training instances are used in tree construction, the resulting tree contains 64 nodes. Despite a 3-fold increase in size over the tree built with 25% of the data, the accuracies of the two trees are statistically indistinguishable. Under a broad range of circumstances, there is a nearly linear relationship between training set size and tree size, even after accuracy has ceased to increase. The relationship between training set size and tree size was explored with 4 pruning methods and 19 datasets taken from the UCI repository.[3] The pruning methods are error-based (EBP, the C4.5 default) [7], reduced error (REP) [8], minimum description length (MDL) [9], and cost-complexity with the 1SE rule (CCP) [1]. The majority of extant pruning methods take one of four general approaches: deflating accuracy estimates based on the training set (e.g. EBP); pruning based on accuracy estimates from a pruning set (e.g. REP); managing the tradeoff between accuracy and complexity (e.g. MDL); and creating a set of pruned trees based on different values of a pruning parameter and then selecting the appropriate parameter value using a pruning set or cross-validation (e.g. CCP). The pruning methods used in this paper were selected to be representative of these four approaches. Plots of tree size and accuracy as a function of training set size were generated for each combination of dataset and pruning algorithm as follows. Typically, [1] All datasets in this paper can be obtained from the University of California-Irvine (UCI) Machine Learning Repository, http://www.ics.uci.edu/~mlearn/MLRepository.html. [2] All reported accuracy figures in this paper are based on separate test sets, distinct from any data used for training. [3] The datasets are the same ones used in [4] with two exceptions. The crx dataset was omitted because it is roughly the same as the australian dataset, and the horse-colic dataset was omitted because it was unclear which attribute was used as the class label. Note that the vote1 dataset was created by removing the physician-fee-freeze attribute from the vote dataset.", "title": "" }, { "docid": "90724c0dddf147d91a7562ef72666213", "text": "Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents.
The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and in the worst case, performance that is on par with full-precision models. We focus on the very low-precision case where weights and activations are both quantized to ±1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks. We observe that non-scaled binary neural networks exhibit a similar effect to the original defensive distillation procedure that led to gradient masking, and a false notion of security. We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients.1", "title": "" }, { "docid": "7744d8734f003ac98e9bc9e7972289d6", "text": "Printem film, a novel method for the fabrication of Printed Circuit Boards (PCBs) for small batch/prototyping use, is presented. Printem film enables a standard office inkjet or laser printer, using standard inks, to produce a PCB: the user prints a negative of the PCB onto the film, exposes it to UV or sunlight, and then tears-away the unneeded portion of the film, leaving-behind a copper PCB. PCBs produced with Printem film are as conductive as PCBs created using standard industrial methods. Herein, the composition of Printem film is described, and advantages of various materials discussed. Sample applications are also described, each of which demonstrates some unique advantage of Printem film over current prototyping methods: conductivity, flexibility, the ability to be cut with a pair of scissors, and the ability to be mounted to a rigid backplane.\n NOTE: publication of full-text held until November 9, 2015.", "title": "" }, { "docid": "82be11a0006f253a1cc3fd2ed85855c8", "text": "Knowledge base (KB) sharing among parties has been proven to be beneficial in several scenarios. However such sharing can arise considerable privacy concerns depending on the sensitivity of the information stored in each party's KB. In this paper, we focus on the problem of exporting a (part of a) KB of a party towards a receiving one. We introduce a novel solution that enables parties to export data in a privacy-preserving fashion, based on a probabilistic data structure, namely the \\emph{count-min sketch}. With this data structure, KBs can be exported in the form of key-value stores and inserted into a set of count-min sketches, where keys can be sensitive and values are counters. Count-min sketches can be tuned to achieve a given key collision probability, which enables a party to deny having certain keys in its own KB, and thus to preserve its privacy. We also introduce a metric, the γ-deniability (novel for count-min sketches), to measure the privacy level obtainable with a count-min sketch. Furthermore, since the value associated to a key can expose to linkage attacks, noise can be added to a count-min sketch to ensure controlled error on retrieved values. Key collisions and noise alter the values contained in the exported KB, and can affect negatively the accuracy of a computation performed on the exported KB. 
We explore the tradeoff between privacy preservation and computation accuracy by experimental evaluations in two scenarios related to malware detection.", "title": "" }, { "docid": "90acdc98c332de55e790d20d48dfde5e", "text": "PURPOSE AND DESIGN\nSnack and Relax® (S&R), a program providing healthy snacks and holistic relaxation modalities to hospital employees, was evaluated for immediate impact. A cross-sectional survey was then conducted to assess the professional quality of life (ProQOL) in registered nurses (RNs); compare S&R participants/nonparticipants on compassion satisfaction (CS), burnout, and secondary traumatic stress (STS); and identify situations in which RNs experienced compassion fatigue or burnout and the strategies used to address these situations.\n\n\nMETHOD\nPre- and post vital signs and self-reported stress were obtained from S&R attendees (N = 210). RNs completed the ProQOL Scale measuring CS, burnout, and STS (N = 158).\n\n\nFINDINGS\nSignificant decreases in self-reported stress, respirations, and heart rate were found immediately after S&R. Low CS was noted in 28.5% of participants, 25.3% had high burnout, and 23.4% had high STS. S&R participants and nonparticipants did not differ on any of the ProQOL scales. Situations in which participants experienced compassion fatigue/burnout were categorized as patient-related, work-related, and personal/family-related. Strategies to address these situations were holistic and stress reducing.\n\n\nCONCLUSION\nProviding holistic interventions such as S&R for nurses in the workplace may alleviate immediate feelings of stress and provide a moment of relaxation in the workday.", "title": "" }, { "docid": "cd1bf567e2e8bfbf460abb3ac1a0d4a5", "text": "Memory channel contention is a critical performance bottleneck in modern systems that have highly parallelized processing units operating on large data sets. The memory channel is contended not only by requests from different user applications (CPU access) but also by system requests for peripheral data (IO access), usually controlled by Direct Memory Access (DMA) engines. Our goal, in this work, is to improve system performance by eliminating memory channel contention between CPU accesses and IO accesses. To this end, we propose a hardware-software cooperative data transfer mechanism, Decoupled DMA (DDMA), that provides a specialized low-cost memory channel for IO accesses. In our DDMA design, main memory has two independent data channels, of which one is connected to the processor (CPU channel) and the other to the IO devices (IO channel), enabling CPU and IO accesses to be served on different channels. System software or the compiler identifies which requests should be handled on the IO channel and communicates this to the DDMA engine, which then initiates the transfers on the IO channel. By doing so, our proposal increases the effective memory channel bandwidth, thereby either accelerating data transfers between system components, or providing opportunities to employ IO performance enhancement techniques (e.g., aggressive IO prefetching) without interfering with CPU accesses. We demonstrate the effectiveness of our DDMA framework in two scenarios: (i) CPU-GPU communication and (ii) in-memory communication (bulk data copy/initialization within the main memory).
By effectively decoupling accesses for CPU-GPU communication and in-memory communication from CPU accesses, our DDMA-based design achieves significant performance improvement across a wide variety of system configurations (e.g., 20% average performance improvement on a typical 2-channel 2-rank memory system).", "title": "" }, { "docid": "9b0114697dc6c260610d0badc1d7a2a4", "text": "This review captures the synthesis, assembly, properties, and applications of copper chalcogenide NCs, which have achieved significant research interest in the last decade due to their compositional and structural versatility. The outstanding functional properties of these materials stem from the relationship between their band structure and defect concentration, including charge carrier concentration and electronic conductivity character, which consequently affects their optoelectronic, optical, and plasmonic properties. This, combined with several metastable crystal phases and stoichiometries and the low energy of formation of defects, makes the reproducible synthesis of these materials, with tunable parameters, remarkable. Further to this, the review captures the progress of the hierarchical assembly of these NCs, which bridges the link between their discrete and collective properties. Their ubiquitous application set has cross-cut energy conversion (photovoltaics, photocatalysis, thermoelectrics), energy storage (lithium-ion batteries, hydrogen generation), emissive materials (plasmonics, LEDs, biolabelling), sensors (electrochemical, biochemical), biomedical devices (magnetic resonance imaging, X-ray computer tomography), and medical therapies (photochemothermal therapies, immunotherapy, radiotherapy, and drug delivery). The confluence of advances in the synthesis, assembly, and application of these NCs in the past decade has the potential to significantly impact society, both economically and environmentally.", "title": "" }, { "docid": "d05dd1185643ced774fa0f4a1fbfe2cb", "text": "This paper explores the use of Support Vector Machines (SVMs) for learning text classifiers from examples. It analyzes the particular properties of learning with text data and identifies why SVMs are appropriate for this task. Empirical results support the theoretical findings. SVMs achieve substantial improvements over the currently best performing methods and behave robustly over a variety of different learning tasks. Furthermore, they are fully automatic, eliminating the need for manual parameter tuning. 1 Introduction With the rapid growth of online information, text categorization has become one of the key techniques for handling and organizing text data. Text categorization techniques are used to classify news stories, to find interesting information on the WWW, and to guide a user's search through hypertext. Since building text classifiers by hand is difficult and time-consuming, it is advantageous to learn classifiers from examples. In this paper I will explore and identify the benefits of Support Vector Machines (SVMs) for text categorization. SVMs are a new learning method introduced by V. Vapnik et al. [9] [1]. They are well-founded in terms of computational learning theory and very open to theoretical understanding and analysis. After reviewing the standard feature vector representation of text, I will identify the particular properties of text in this representation in section 4. I will argue that SVMs are very well suited for learning in this setting.
The empirical results in section 5 will support this claim. Compared to state-of-the-art methods, SVMs show substantial performance gains. Moreover, in contrast to conventional text classification methods SVMs will prove to be very robust, eliminating the need for expensive parameter tuning. 2 Text Categorization The goal of text categorization is the classification of documents into a fixed number of predefined categories. Each document can be in multiple, exactly one, or no category at all. Using machine learning, the objective is to learn classifiers", "title": "" }, { "docid": "2e384001b105d0b3ace839051cdddf88", "text": "Conformal prediction is a relatively new framework in which the predictive models output sets of predictions with a bound on the error rate, i.e., in a classification context, the probability of excluding the correct class label is lower than a predefined significance level. An investigation of the use of decision trees within the conformal prediction framework is presented, with the overall purpose to determine the effect of different algorithmic choices, including split criterion, pruning scheme and way to calculate the probability estimates. Since the error rate is bounded by the framework, the most important property of conformal predictors is efficiency, which concerns minimizing the number of elements in the output prediction sets. Results from one of the largest empirical investigations to date within the conformal prediction framework are presented, showing that in order to optimize efficiency, the decision trees should be induced using no pruning and with smoothed probability estimates. The choice of split criterion to use for the actual induction of the trees did not turn out to have any major impact on the efficiency. Finally, the experimentation also showed that when using decision trees, standard inductive conformal prediction was as efficient as the recently suggested method cross-conformal prediction. This is an encouraging result since cross-conformal prediction uses several decision trees, thus sacrificing the interpretability of a single decision tree.", "title": "" }, { "docid": "27fef13797e0535637468eb568183a08", "text": "This paper introduces a principled incremental view maintenance (IVM) mechanism for in-database computation described by rings. We exemplify our approach by introducing the covariance matrix ring that we use for learning linear regression models over arbitrary equi-join queries. Our approach is a higher-order IVM algorithm that exploits the factorized structure of joins and aggregates to avoid redundant computation and improve performance. We implemented it in DBToaster, which uses program synthesis to generate high-performance maintenance code. We experimentally show that it can outperform first-order and fully recursive higher-order IVM as well as recomputation by orders of magnitude while using less memory.", "title": "" } ]
scidocsrr
0321f1b5606fcda600774840194f0604
Application of inverse kinematics for skeleton manipulation in real-time
[ { "docid": "45df032a26dc7a27ed6f68cea5f7c033", "text": "Computer animation of articulated figures can be tedious, largely due to the amount of data which must be specified at each frame. Animation techniques range from simple interpolation between keyframed figure poses to higher-level algorithmic models of specific movement patterns. The former provides the animator with complete control over the movement, whereas the latter may provide only limited control via some high-level parameters incorporated into the model. Inverse kinematic techniques adopted from the robotics literature have the potential to relieve the animator of detailed specification of every motion parameter within a figure, while retaining complete control over the movement, if desired. This work investigates the use of inverse kinematics and simple geometric constraints as tools for the animator. Previous applications of inverse kinematic algorithms to conlputer animation are reviewed. A pair of alternative algorithms suitable for a direct manipulation interface are presented and qualitatively compared. Application of these algorithms to enforce simple geometric constraints on a figure during interactive manipulation is discussed. An implementation of one of these algorithms within an existing figure animation editor is described, which provides constrained inverse kinematic figure manipulation for the creation of keyframes.", "title": "" }, { "docid": "242030243133cd57d6cc62be154fd6ec", "text": "| The inverse kinematics of serial manipulators is a central problem in the automatic control of robot manipula-tors. The main interest has been in inverse kinematics of a six revolute (6R) jointed manipulator with arbitrary geometry. It has been recently shown that the joints of a general 6R manipulator can orient themselves in 16 diierent con-gurations (at most), for a given pose of the end{eeector. However, there are no good practical solutions available, which give a level of performance expected of industrial ma-nipulators. In this paper, we present an algorithm and implementation for eecient inverse kinematics for a general 6R manipulator. When stated mathematically, the problem reduces to solving a system of multivariate equations. We make use of the algebraic properties of the system and the symbolic formulation used for reducing the problem to solving a univariate polynomial. However, the polynomial is expressed as a matrix determinant and its roots are computed by reducing to an eigenvalue problem. The other roots of the multivariate system are obtained by computing eigenvectors and substitution. The algorithm involves symbolic preprocessing, matrix computations and a variety of other numerical techniques. The average running time of the algorithm, for most cases, is 11 milliseconds on an IBM RS/6000 workstation. This approach is applicable to inverse kinematics of all serial manipulators.", "title": "" } ]
[ { "docid": "5e261764696ebfb02196b0f9a6b7a4a6", "text": "When the cost of misclassifying a sample is high, it is useful to have an accurate estimate of uncertainty in the prediction for that sample. There are also multiple types of uncertainty which are best estimated in different ways, for example, uncertainty that is intrinsic to the training set may be well-handled by a Bayesian approach, while uncertainty introduced by shifts between training and query distributions may be better-addressed by density/support estimation. In this paper, we examine three types of uncertainty: model capacity uncertainty, intrinsic data uncertainty, and open set uncertainty, and review techniques that have been derived to address each one. We then introduce a unified hierarchical model, which combines methods from Bayesian inference, invertible latent density inference, and discriminative classification in a single end-to-end deep neural network topology to yield efficient per-sample uncertainty estimation. Our approach addresses all three uncertainty types and readily accommodates prior/base rates for binary detection.", "title": "" }, { "docid": "a4605974c90bc17edf715eb9edb10b8a", "text": "Natural language processing has been in existence for more than fifty years. During this time, it has significantly contributed to the field of human-computer interaction in terms of theoretical results and practical applications. As computers continue to become more affordable and accessible, the importance of user interfaces that are effective, robust, unobtrusive, and user-friendly – regardless of user expertise or impediments – becomes more pronounced. Since natural language usually provides for effortless and effective communication in human-human interaction, its significance and potential in human-computer interaction should not be overlooked – either spoken or typewritten, it may effectively complement other available modalities, such as windows, icons, and menus, and pointing; in some cases, such as in users with disabilities, natural language may even be the only applicable modality. This chapter examines the field of natural language processing as it relates to humancomputer interaction by focusing on its history, interactive application areas, theoretical approaches to linguistic modeling, and relevant computational and philosophical issues. It also presents a taxonomy for interactive natural language systems based on their linguistic knowledge and processing requirements, and reviews related applications. Finally, it discusses linguistic coverage issues, and explores the development of natural language widgets and their integration into multimodal user interfaces.", "title": "" }, { "docid": "cc5ede31b7dd9faa2cce9d2aa8819a3c", "text": "Despite considerable research on systems, algorithms and hardware to speed up deep learning workloads, there is no standard means of evaluating end-to-end deep learning performance. Existing benchmarks measure proxy metrics, such as time to process one minibatch of data, that do not indicate whether the system as a whole will produce a high-quality result. In this work, we introduce DAWNBench, a benchmark and competition focused on end-to-end training time to achieve a state-of-the-art accuracy level, as well as inference time with that accuracy. Using time to accuracy as a target metric, we explore how different optimizations, including choice of optimizer, stochastic depth, and multi-GPU training, affect end-to-end training performance. 
Our results demonstrate that optimizations can interact in non-trivial ways when used in conjunction, producing lower speed-ups and less accurate models. We believe DAWNBench will provide a useful, reproducible means of evaluating the many trade-offs in deep learning systems.", "title": "" }, { "docid": "feb06a9edd1d0f2608b13ac26a4f6704", "text": "In recent years, free space optical (FSO) communication has gained significant importance owing to its unique features: large bandwidth, license free spectrum, high data rate, easy and quick deployability, less power, and low mass requirements. FSO communication uses optical carrier in the near infrared band to establish either terrestrial links within the Earth’s atmosphere or inter-satellite/deep space links or ground-to-satellite/satellite-to-ground links. It also finds its applications in remote sensing, radio astronomy, military, disaster recovery, last mile access, backhaul for wireless cellular networks, and many more. However, despite of great potential of FSO communication, its performance is limited by the adverse effects (viz., absorption, scattering, and turbulence) of the atmospheric channel. Out of these three effects, the atmospheric turbulence is a major challenge that may lead to serious degradation in the bit error rate performance of the system and make the communication link infeasible. This paper presents a comprehensive survey on various challenges faced by FSO communication system for ground-to-satellite/satellite-to-ground and inter-satellite links. It also provides details of various performance mitigation techniques in order to have high link availability and reliability. The first part of this paper will focus on various types of impairments that pose a serious challenge to the performance of optical communication system for ground-to-satellite/satellite-to-ground and inter-satellite links. The latter part of this paper will provide the reader with an exhaustive review of various techniques both at physical layer as well as at the other layers (link, network, or transport layer) to combat the adverse effects of the atmosphere. It also uniquely presents a recently developed technique using orbital angular momentum for utilizing the high capacity advantage of optical carrier in case of space-based and near-Earth optical communication links. This survey provides the reader with comprehensive details on the use of space-based optical backhaul links in order to provide high capacity and low cost backhaul solutions.", "title": "" }, { "docid": "9ec10477ba242675c8bad3a1ca335b38", "text": "PURPOSE\nThis paper explores the importance of family daily routines and rituals for the family's functioning and sense of identity.\n\n\nMETHODS\nThe findings of this paper are derived from an analysis of the morning routines of 40 families with children with disabilities in the United States and Canada. The participants lived in urban and rural areas. Forty of the 49 participants were mothers and the majority of the families were of European descent. Between one and four interviews were conducted with each participant. Topics included the family's story, daily routines, and particular occupations. Data on the morning routines of the families were analyzed for order and affective and symbolic meaning using a narrative approach.\n\n\nFINDINGS\nThe findings are presented as narratives of morning activities in five families. These narratives are examples for rituals, routines, and the absence of a routine. 
Rituals are discussed in terms of their affective and symbolic qualities, routines are discussed in terms of the order they give to family life, whereas the lack of family routine is discussed in terms of lack of order in the family.\n\n\nCONCLUSIONS\nFamily routines and rituals are organizational and meaning systems that may affect family's ability to adapt them.", "title": "" }, { "docid": "1d234f8df57e0ee5354125e25d97b69b", "text": "FLUXNET is a global network of micrometeorological flux measurement sites that measure the exchanges of car­ bon dioxide, water vapor, and energy between the biosphere and atmosphere. At present over 140 sites are operating on a long-term and continuous basis. Vegetation under study includes temperate conifer and broadleaved (deciduous and evergreen) forests, tropical and boreal forests, crops, grasslands, chaparral, wetlands, and tundra. Sites exist on five con­ tinents and their latitudinal distribution ranges from 70°N to 30°S. FLUXNET has several primary functions. First, it provides infrastructure for compiling, archiving, and distributing carbon, water, and energy flux measurement, and meteorological, plant, and soil data to the science community. (Data and site information are available online at the FLUXNET Web site, http://www-eosdis.oml.gov/FLUXNET/.) Second, the project supports calibration and flux intercomparison activities. This activity ensures that data from the regional networks are intercomparable. And third, FLUXNET supports the synthesis, discussion, and communication of ideas and data by supporting project scientists, workshops, and visiting scientists. The overarching goal is to provide infor­ mation for validating computations of net primary productivity, evaporation, and energy absorption that are being generated by sensors mounted on the NASA Terra satellite. Data being compiled by FLUXNET are being used to quantify and compare magnitudes and dynamics of annual ecosystem carbon and water balances, to quantify the response of stand-scale carbon dioxide and water vapor flux densities to controlling biotic and abiotic factors, and to validate a hierarchy of soil-plant-atmosphere trace gas ex­ change models. Findings so far include 1) net CO ̂ exchange of temperate broadleaved forests increases by about 5.7 g C m“̂ day^ for each additional day that the growing season is extended; 2) the sensitivity of net ecosystem CO ̂ exchange to sunlight doubles if the sky is cloudy rather than clear; 3) the spectrum of CO ̂flux density exhibits peaks at timescales of days, weeks, and years, and a spectral gap exists at the month timescale; 4) the optimal temperature of net CO ̂exchange varies with mean summer temperature; and 5) stand age affects carbon dioxide and water vapor flux densities. ^ESPM, University o f California, Berkeley, Berkeley, California. *^Pflanzenokologie, Universitat Bayreuth, Bayreuth, Germany. Environm ental Science Division, Oak Ridge National Laboratory, Oak", "title": "" }, { "docid": "d337553027aa2d7464a5631a9b99c421", "text": "This paper presents a real-time vision framework that detects and tracks vehicles from stationary camera. It can be used to calculate statistical information such as average traffic speed and flow as well as in surveillance tasks. The framework consists of three main stages. Vehicles are first detected using Haar-like features. In the second phase, an adaptive appearance-based model is built to dynamically keep track of the detected vehicles. 
This model is also used in the third phase of data association to fuse the detection and tracking results. The use of detection results to update the tracker enhances the overall framework accuracy. The practical value of the proposed framework is demonstrated in real-life experiments where it is used to robustly compute vehicle counts within certain region of interest under variety of challenges.", "title": "" }, { "docid": "d87295095ef11648890b19cd0608d5da", "text": "Link prediction and recommendation is a fundamental problem in social network analysis. The key challenge of link prediction comes from the sparsity of networks due to the strong disproportion of links that they have potential to form to links that do form. Most previous work tries to solve the problem in single network, few research focus on capturing the general principles of link formation across heterogeneous networks. In this work, we give a formal definition of link recommendation across heterogeneous networks. Then we propose a ranking factor graph model (RFG) for predicting links in social networks, which effectively improves the predictive performance. Motivated by the intuition that people make friends in different networks with similar principles, we find several social patterns that are general across heterogeneous networks. With the general social patterns, we develop a transfer-based RFG model that combines them with network structure information. This model provides us insight into fundamental principles that drive the link formation and network evolution. Finally, we verify the predictive performance of the presented transfer model on 12 pairs of transfer cases. Our experimental results demonstrate that the transfer of general social patterns indeed help the prediction of links.", "title": "" }, { "docid": "d76980f3a0b4e0dab21583b75ee16318", "text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general. Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.", "title": "" }, { "docid": "08948dbae407fda2b1e9fb8c5231f796", "text": "Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). 
We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception.", "title": "" }, { "docid": "abb43256001147c813d12b89d2f9e67b", "text": "We study the distributed computing setting in which there are multiple servers, each holding a set of points, who wish to compute functions on the union of their point sets. A key task in this setting is Principal Component Analysis (PCA), in which the servers would like to compute a low dimensional subspace capturing as much of the variance of the union of their point sets as possible. Given a procedure for approximate PCA, one can use it to approximately solve problems such as k-means clustering and low rank approximation. The essential properties of an approximate distributed PCA algorithm are its communication cost and computational efficiency for a given desired accuracy in downstream applications. We give new algorithms and analyses for distributed PCA which lead to improved communication and computational costs for k-means clustering and related problems. Our empirical study on real world data shows a speedup of orders of magnitude, preserving communication with only a negligible degradation in solution quality. Some of these techniques we develop, such as a general transformation from a constant success probability subspace embedding to a high success probability subspace embedding with a dimension and sparsity independent of the success probability, may be of independent interest.", "title": "" }, { "docid": "150f27f47e9ffd6cd4bc0756bd08aed4", "text": "Sunni extremism poses a significant danger to society, yet it is relatively easy for these extremist organizations to spread jihadist propaganda and recruit new members via the Internet, Darknet, and social media. The sheer volume of these sites make them very difficult to police. This paper discusses an approach that can assist with this problem, by automatically identifying a subset of web pages and social media content (or any text) that contains extremist content. The approach utilizes machine learning, specifically neural networks and deep learning, to classify text as containing “extremist” or “benign” (i.e., not extremist) content. This method is robust and can effectively learn to classify extremist multilingual text of varying length. This study also involved the construction of a high quality dataset for training and testing, put together by a team of 40 people (some with fluency in Arabic) who expended 9,500 hours of combined effort. This dataset should facilitate future research on this topic.", "title": "" }, { "docid": "75ed1ebdb7813935e355f5866f1c61a4", "text": "In this paper, we reconsider the clustering problem for image over-segmentation from a new perspective. We propose a novel search algorithm called “active search” which explicitly considers neighbor continuity. 
Based on this search method, we design a back-and-forth traversal strategy and a joint assignment and update step to speed up the algorithm. Compared to earlier methods, such as simple linear iterative clustering (SLIC) and its variants, which use fixed search regions and perform the assignment and the update steps separately, our novel scheme reduces the number of iterations required for convergence, and also provides better boundaries in the over-segmentation results. Extensive evaluation using the Berkeley segmentation benchmark verifies that our method outperforms competing methods under various evaluation metrics. In particular, our method is fastest, achieving approximately 30 fps for a 481 × 321 image on a single CPU core. To facilitate further research, our code is made publicly available.", "title": "" }, { "docid": "1d1caa539215e7051c25a9f28da48651", "text": "Physiological changes occur in pregnancy to nurture the developing foetus and prepare the mother for labour and delivery. Some of these changes influence normal biochemical values while others may mimic symptoms of medical disease. It is important to differentiate between normal physiological changes and disease pathology. This review highlights the important changes that take place during normal pregnancy.", "title": "" }, { "docid": "2708052c26111d54ba2c235afa26f71f", "text": "Reinforcement Learning (RL) has been an interesting research area in Machine Learning and AI. Hierarchical Reinforcement Learning (HRL) that decomposes the RL problem into sub-problems where solving each of which will be more powerful than solving the entire problem will be our concern in this paper. A review of the state-of-the-art of HRL has been investigated. Different HRL-based domains have been highlighted. Different problems in such different domains along with some proposed solutions have been addressed. It has been observed that HRL has not yet been surveyed in the current existing research; the reason that motivated us to work on this paper. Concluding remarks are presented. Some ideas have been emerged during the work on this research and have been proposed for pursuing a future research.", "title": "" }, { "docid": "8f1e3444c073a510df1594dc88d24b6b", "text": "Purpose – The purpose of this paper is to provide industrial managers with insight into the real-time progress of running processes. The authors formulated a periodic performance prediction algorithm for use in a proposed novel approach to real-time business process monitoring. Design/methodology/approach – In the course of process executions, the final performance is predicted probabilistically based on partial information. Imputation method is used to generate probable progresses of ongoing process and Support Vector Machine classifies the performances of them. These procedures are periodically iterated along with the real-time progress in order to describe the ongoing status. Findings – The proposed approach can describe the ongoing status as the probability that the process will be executed continually and terminated as the identical result. Furthermore, before the actual occurrence, a proactive warning can be provided for implicit notification of eventualities if the probability of occurrence of the given outcome exceeds the threshold. Research limitations/implications – The performance of the proactive warning strategy was evaluated only for accuracy and proactiveness. 
However, the process will be improved by additionally considering opportunity costs and benefits from actual termination types and their warning errors. Originality/value – Whereas the conventional monitoring approaches only classify the already occurred result of a terminated instance deterministically, the proposed approach predicts the possible results of an ongoing instance probabilistically over entire monitoring periods. As such, the proposed approach can provide the real-time indicator describing the current capability of ongoing process.", "title": "" }, { "docid": "845348dda35036869b1ecc12658d5603", "text": "Recent studies on human motor control have been largely innuenced by two important statements: (1) Sensory feedback is too slow to be involved at least in fast motor control actions; (2) Learned internal model of the systems plays an important role in motor control. As a result , the human motor control system is often described as open-loop and particularly as a system inverse. System inverse control is limited by too many problems to be a plausible candidate. Instead, an alternative between open-loop and feedback control is proposed here: the \"open-loop intermittent feedback optimal control\". In this scheme, a prediction of the future behaviour of the system, that requires feedback information and a system model, is used to determine a sequence of actions which is run open-loop. The prediction of a new control sequence is performed intermittently (due to computational demand and slow sensory feedback) but with a suucient frequency to ensure small control errors. The inverted pendulum on a cart is used to illustrate the viability of this scheme.", "title": "" }, { "docid": "02c8093183af96808a71b93ee3103996", "text": "The medical field stands to see significant benefits from the recent advances in deep learning. Knowing the uncertainty in the decision made by any machine learning algorithm is of utmost importance for medical practitioners. This study demonstrates the utility of using Bayesian LSTMs for classification of medical time series. Four medical time series datasets are used to show the accuracy improvement Bayesian LSTMs provide over standard LSTMs. Moreover, we show cherry-picked examples of confident and uncertain classifications of the medical time series. With simple modifications of the common practice for deep learning, significant improvements can be made for the medical practitioner and patient.", "title": "" }, { "docid": "85809b8e7811adb37314da2aaa28a70c", "text": "Underwater wireless sensor networks (UWSNs) will pave the way for a new era of underwater monitoring and actuation applications. The envisioned landscape of UWSN applications will help us learn more about our oceans, as well as about what lies beneath them. They are expected to change the current reality where no more than 5% of the volume of the oceans has been observed by humans. However, to enable large deployments of UWSNs, networking solutions toward efficient and reliable underwater data collection need to be investigated and proposed. In this context, the use of topology control algorithms for a suitable, autonomous, and on-the-fly organization of the UWSN topology might mitigate the undesired effects of underwater wireless communications and consequently improve the performance of networking services and protocols designed for UWSNs. This article presents and discusses the intrinsic properties, potentials, and current research challenges of topology control in underwater sensor networks. 
We propose to classify topology control algorithms based on the principal methodology used to change the network topology. They can be categorized in three major groups: power control, wireless interface mode management, and mobility-assisted techniques. Using the proposed classification, we survey the current state of the art and present an in-depth discussion of topology control solutions designed for UWSNs.", "title": "" } ]
scidocsrr
eff6837a68e0302edf34b2584277b108
Cost-Sensitive Semi-Supervised Support Vector Machine
[ { "docid": "125655821a44bbce2646157c8465e345", "text": "Due to its wide applicability, the problem of semi-supervised classification is attracting increasing attention in machine learning. Semi-Supervised Support Vector Machines (S3VMs) are based on applying the margin maximization principle to both labeled and unlabeled examples. Unlike SVMs, their formulation leads to a non-convex optimization problem. A suite of algorithms have recently been proposed for solving S3VMs. This paper reviews key ideas in this literature. The performance and behavior of various S3VM algorithms is studied together, under a common experimental setting.", "title": "" }, { "docid": "9bb4934d1d7b5ea27b251e4153b4fbc7", "text": "In the usual setting of Machine Learning, classifiers are typically evaluated by estimating their error rate (or equi valently, the classification accuracy) on the test data. However, this mak es sense only if all errors have equal (uniform) costs. When the costs of errors differ between each other, the classifiers should be eva luated by comparing the total costs of the errors. Classifiers are typically designed to minimize the number of errors (incorrect classifications) made. When misclassification c sts vary between classes, this approach is not suitable. In this case the total misclassification cost should be minimized. In Machine Learning, only little work for dealing with nonuniform misclassification costs has been done. This paper pr esents a few different approaches for cost-sensitive modifications f the backpropagation learning algorithm for multilayered feedforw a d neural networks . The described approaches are thoroughly tested a nd evaluated on several standard benchmark domains.", "title": "" }, { "docid": "04ba17b4fc6b506ee236ba501d6cb0cf", "text": "We propose a family of learning algorithms based on a new form f regularization that allows us to exploit the geometry of the marginal distribution. We foc us on a semi-supervised framework that incorporates labeled and unlabeled data in a general-p u pose learner. Some transductive graph learning algorithms and standard methods including Suppor t Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize pr op rties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theor e ical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we ob tain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semiupervised algorithms are able to use unlabeled data effectively. Finally we have a brief discuss ion of unsupervised and fully supervised learning within our general framework.", "title": "" } ]
[ { "docid": "f130271897998e0e43ed2d530ed81ebe", "text": "Provable Data Possession (PDP) enables cloud users to verify the integrity of their outsourced data without retrieving the entire file from cloud servers. At present, to execute data checking, many PDP schemes is delegated to some proxy to implement remote data possession checking task. Because the proxy may store some state information in cloud storage servers, it makes that many PDP scheme are insecure. To solve this problem, Ren et al. proposed an mutual verifiable provable data possession scheme and claimed that their scheme is secure. Unfortunately, in this work, we show that their scheme is insecure. It exists forgery attack and replay attack. After giving the corresponding attacks, we give an improved scheme to overcome the above flaws. By analyzing, we show that our improved PDP scheme is secure under the ChosenTarget-CDH problem and the CDH problem.", "title": "" }, { "docid": "bb03f7d799b101966b4ea6e75cd17fea", "text": "Fuzzy decision trees (FDTs) have shown to be an effective solution in the framework of fuzzy classification. The approaches proposed so far to FDT learning, however, have generally neglected time and space requirements. In this paper, we propose a distributed FDT learning scheme shaped according to the MapReduce programming model for generating both binary and multiway FDTs from big data. The scheme relies on a novel distributed fuzzy discretizer that generates a strong fuzzy partition for each continuous attribute based on fuzzy information entropy. The fuzzy partitions are, therefore, used as an input to the FDT learning algorithm, which employs fuzzy information gain for selecting the attributes at the decision nodes. We have implemented the FDT learning scheme on the Apache Spark framework. We have used ten real-world publicly available big datasets for evaluating the behavior of the scheme along three dimensions: 1) performance in terms of classification accuracy, model complexity, and execution time; 2) scalability varying the number of computing units; and 3) ability to efficiently accommodate an increasing dataset size. We have demonstrated that the proposed scheme turns out to be suitable for managing big datasets even with a modest commodity hardware support. Finally, we have used the distributed decision tree learning algorithm implemented in the MLLib library and the Chi-FRBCS-BigData algorithm, a MapReduce distributed fuzzy rule-based classification system, for comparative analysis.", "title": "" }, { "docid": "df331d60ab6560808e28e3813766b67b", "text": "Analyzing large graphs provides valuable insights for social networking and web companies in content ranking and recommendations. While numerous graph processing systems have been developed and evaluated on available benchmark graphs of up to 6.6B edges, they often face significant difficulties in scaling to much larger graphs. Industry graphs can be two orders of magnitude larger hundreds of billions or up to one trillion edges. In addition to scalability challenges, real world applications often require much more complex graph processing workflows than previously evaluated. In this paper, we describe the usability, performance, and scalability improvements we made to Apache Giraph, an open-source graph processing system, in order to use it on Facebook-scale graphs of up to one trillion edges. 
We also describe several key extensions to the original Pregel model that make it possible to develop a broader range of production graph applications and workflows as well as improve code reuse. Finally, we report on real-world operations as well as performance characteristics of several large-scale production applications.", "title": "" }, { "docid": "2d6c47bbbbf1abf36cf0218515bf22dc", "text": "Object detection is the identification of an object in the image along with its localization and classification. It has wide spread applications and is a critical component for vision based software systems. This paper seeks to perform a rigorous survey of modern object detection algorithms that use deep learning. As part of the survey, the topics explored include various algorithms, quality metrics, speed/size trade offs and training methodologies. This paper focuses on the two types of object detection algorithmsthe SSD class of single step detectors and the Faster R-CNN class of two step detectors. Techniques to construct detectors that are portable and fast on low powered devices are also addressed by exploring new lightweight convolutional base architectures. Ultimately, a rigorous review of the strengths and weaknesses of each detector leads us to the present state of the art.", "title": "" }, { "docid": "341e0b7d04b333376674dac3c0888f50", "text": "Software archives contain historical information about the development process of a software system. Using data mining techniques rules can be extracted from these archives. In this paper we discuss how standard visualization techniques can be applied to interactively explore these rules. To this end we extended the standard visualization techniques for association rules and sequence rules to also show the hierarchical order of items. Clusters and outliers in the resulting visualizations provide interesting insights into the relation between the temporal development of a system and its static structure. As an example we look at the large software archive of the MOZILLA open source project. Finally we discuss what kind of regularities and anomalies we found and how these can then be leveraged to support software engineers.", "title": "" }, { "docid": "71c6c714535ae1bfd749cbb8bbb34f5e", "text": "This paper tackles the problem of relative pose estimation between two monocular camera images in textureless scenes. Due to a lack of point matches, point-based approaches such as the 5-point algorithm often fail when used in these scenarios. Therefore we investigate relative pose estimation from line observations. We propose a new approach in which the relative pose estimation from lines is extended by a 3D line direction estimation step. The estimated line directions serve to improve the robustness and the efficiency of all processing phases: they enable us to guide the matching of line features and allow an efficient calculation of the relative pose. First, we describe in detail the novel 3D line direction estimation from a single image by clustering of parallel lines in the world. Secondly, we propose an innovative guided matching in which only clusters of lines with corresponding 3D line directions are considered. Thirdly, we introduce the new relative pose estimation based on 3D line directions. Finally, we combine all steps to a visual odometry system. 
We evaluate the different steps on synthetic and real sequences and demonstrate that in the targeted scenarios we outperform the state-of-the-art in both accuracy and computation time.", "title": "" }, { "docid": "c625221e79bdc508c7c772f5be0458a1", "text": "Word embeddings that can capture semantic and syntactic information from contexts have been extensively used for various natural language processing tasks. However, existing methods for learning contextbased word embeddings typically fail to capture sufficient sentiment information. This may result in words with similar vector representations having an opposite sentiment polarity (e.g., good and bad), thus degrading sentiment analysis performance. Therefore, this study proposes a word vector refinement model that can be applied to any pre-trained word vectors (e.g., Word2vec and GloVe). The refinement model is based on adjusting the vector representations of words such that they can be closer to both semantically and sentimentally similar words and further away from sentimentally dissimilar words. Experimental results show that the proposed method can improve conventional word embeddings and outperform previously proposed sentiment embeddings for both binary and fine-grained classification on Stanford Sentiment Treebank (SST).", "title": "" }, { "docid": "b08e85bd5c36f8d99725db6e8c227158", "text": "The Non-Conventional sources such as solar energy has been replacement and best exploited electric source. The solar electric power required DC-DC converter for production, controllable and regulation of variable solar electric energy. The single ended boost converter has been replaced by SEPIC converter to overcome the problem associated with DC-DC converter. The problem associated with DC converter such as high amount of ripple, create harmonics, invert the voltage, create overheating and effective efficiency can be minimized and achieved best efficiency by SEPIC converters. This paper has been focused on design, comparison of DC-DC solar system with the SEPIC converter as using closed loop feedback control. In comparison DC-DC converter to SEPIC converter, it has highly efficient more than 1–5 %.", "title": "" }, { "docid": "a0f20c2481aefc3b431f708ade0cc1aa", "text": "Objective Video game violence has become a highly politicized issue for scientists and the general public. There is continuing concern that playing violent video games may increase the risk of aggression in players. Less often discussed is the possibility that playing violent video games may promote certain positive developments, particularly related to visuospatial cognition. The objective of the current article was to conduct a meta-analytic review of studies that examine the impact of violent video games on both aggressive behavior and visuospatial cognition in order to understand the full impact of such games. Methods A detailed literature search was used to identify peer-reviewed articles addressing violent video game effects. Effect sizes r (a common measure of effect size based on the correlational coefficient) were calculated for all included studies. Effect sizes were adjusted for observed publication bias. Results Results indicated that publication bias was a problem for studies of both aggressive behavior and visuospatial cognition. Once corrected for publication bias, studies of video game violence provided no support for the hypothesis that violent video game playing is associated with higher aggression. 
However playing violent video games remained related to higher visuospatial cognition (r x = 0.36). Conclusions Results from the current analysis did not support the conclusion that violent video game playing leads to aggressive behavior. However, violent video game playing was associated with higher visuospatial cognition. It may be advisable to reframe the violent video game debate in reference to potential costs and benefits of this medium.", "title": "" }, { "docid": "8b3962dc5895a46c913816f208aa8e60", "text": "Glaucoma is the second leading cause of blindness worldwide. It is a disease in which fluid pressure in the eye increases continuously, damaging the optic nerve and causing vision loss. Computational decision support systems for the early detection of glaucoma can help prevent this complication. The retinal optic nerve fiber layer can be assessed using optical coherence tomography, scanning laser polarimetry, and Heidelberg retina tomography scanning methods. In this paper, we present a novel method for glaucoma detection using a combination of texture and higher order spectra (HOS) features from digital fundus images. Support vector machine, sequential minimal optimization, naive Bayesian, and random-forest classifiers are used to perform supervised classification. Our results demonstrate that the texture and HOS features after z-score normalization and feature selection, and when combined with a random-forest classifier, performs better than the other classifiers and correctly identifies the glaucoma images with an accuracy of more than 91%. The impact of feature ranking and normalization is also studied to improve results. Our proposed novel features are clinically significant and can be used to detect glaucoma accurately.", "title": "" }, { "docid": "803b8189288dc07411c4e9e48dcac9b2", "text": "Machine-to-machine (M2M) communication is becoming an increasingly important part of mobile traffic and thus also a topic of major interest for mobile communication research and telecommunication standardization bodies. M2M communication offers various ubiquitous services and is one of the main enablers of the vision inspired by the Internet of Things (IoT). The concept of mobile M2M communication has emerged due to the wide range, coverage provisioning, high reliability, and decreasing costs of future mobile networks. Nevertheless, M2M communications pose significant challenges to mobile networks, e.g., due to the expected large number of devices with simultaneous access for sending small-sized data, and a diverse application range. This paper provides a detailed survey of M2M communications in the context of mobile networks, and thus focuses on the latest Long-Term Evolution-Advanced (LTE-A) networks. Moreover, the end-to-end network architectures and reference models for M2M communication are presented. Furthermore, a comprehensive survey is given to M2M service requirements, major current standardization efforts, and upcoming M2M-related challenges. In addition, an overview of upcoming M2M services expected in 5G networks is presented. In the end, various mobile M2M applications are discussed followed by open research questions and directions.", "title": "" }, { "docid": "7b28877bcda4c0fa0f89eadd7146e173", "text": "REST architectural style gains increasing popularity in the networking protocol design, and it has become a prevalent choice for northbound API of Software-Defined Networking (SDN). 
This paper addresses many critical issues in RESTful networking protocol design, and presents a framework on how a networking protocol can be designed in a truly RESTful manner, making it towards a service oriented data networking. In particular, we introduce the HTTP content negotiation mechanism which allows clients to select different representation formats from the same resource URI. Most importantly, we present a hypertext-driven approach, so that hypertext links are defined between REST resources for the networking protocol to guide clients to identify the right resources rather than relying on fixed resource URIs. The advantages of our approach are verified in two folds. First, we show how to apply our approach to fix REST design problems in some existing northbound networking APIs, and then we show how to design a RESTful northbound API of SDN in the context of OpenStack. We implemented our proposed approach in the northbound REST API of SOX, a generalized SDN controller, and the benefits of the proposed approach are experimentally verified.", "title": "" }, { "docid": "6c983878e3a50fa9b1b08c756735f588", "text": "BACKGROUND\nEngagement in online programs is difficult to maintain. Gamification is the recent trend that offers to increase engagement through the inclusion of game-like features like points and badges, in non-game contexts. This review will answer the following question, 'Are gamification strategies effective in increasing engagement in online programs?'\n\n\nMETHOD\nEight databases (Web of Science, PsycINFO, Medline, INSPEC, ERIC, Cochrane Library, Business Source Complete and ACM Digital Library) were searched from 2010 to the 28th of October 2015 using a comprehensive search strategy. Eligibility criteria was based on the PICOS format, where \"population\" included adults, \"intervention\" involved an online program or smart phone application that included at least one gamification feature. \"Comparator\" was a control group, \"outcomes\" included engagement and \"downstream\" outcomes which occurred as a result of engagement; and \"study design\" included experimental studies from peer-reviewed sources. Effect sizes (Cohens d and 95% confidence intervals) were also calculated.\n\n\nRESULTS\n1017 studies were identified from database searches following the removal of duplicates, of which 15 met the inclusion criteria. The studies involved a total of 10,499 participants, and were commonly undertaken in tertiary education contexts. Engagement metrics included time spent (n = 5), volume of contributions (n = 11) and occasions visited to the software (n = 4); as well as downstream behaviours such as performance (n = 4) and healthy behaviours (n = 1). Effect sizes typically ranged from medium to large in direct engagement and downstream behaviours, with 12 out of 15 studies finding positive significant effects in favour of gamification.\n\n\nCONCLUSION\nGamification is effective in increasing engagement in online programs. Key recommendations for future research into gamification are provided. In particular, rigorous study designs are required to fully examine gamification's effects and determine how to best achieve sustained engagement.", "title": "" }, { "docid": "f409eace05cd617355440509da50d685", "text": "Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health related information might be used to infer health status and incidence rates for specific conditions or symptoms. 
In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza like illness in Portugal. Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p < 0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the english language. Our results further validate those studies and show that by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results.", "title": "" }, { "docid": "4681e8f07225e305adfc66cd1b48deb8", "text": "Collaborative work among students, while an important topic of inquiry, needs further treatment as we still lack the knowledge regarding obstacles that students face, the strategies they apply, and the relations among personal and group aspects. This article presents a diary study of 54 master’s students conducting group projects across four semesters. A total of 332 diary entries were analysed using the C5 model of collaboration that incorporates elements of communication, contribution, coordination, cooperation and collaboration. Quantitative and qualitative analyses show how these elements relate to one another for students working on collaborative projects. It was found that face-to-face communication related positively with satisfaction and group dynamics, whereas online chat correlated positively with feedback and closing the gap. Managing scope was perceived to be the most common challenge. The findings suggest the varying affordances and drawbacks of different methods of communication, collaborative work styles and the strategies of group members.", "title": "" }, { "docid": "752cf1c7cefa870c01053d87ff4f445c", "text": "Cannabidiol (CBD) represents a new promising drug due to a wide spectrum of pharmacological actions. In order to relate CBD clinical efficacy to its pharmacological mechanisms of action, we performed a bibliographic search on PUBMED about all clinical studies investigating the use of CBD as a treatment of psychiatric symptoms. 
Findings to date suggest that (a) CBD may exert antipsychotic effects in schizophrenia mainly through facilitation of endocannabinoid signalling and cannabinoid receptor type 1 antagonism; (b) CBD administration may exhibit acute anxiolytic effects in patients with generalised social anxiety disorder through modification of cerebral blood flow in specific brain sites and serotonin 1A receptor agonism; (c) CBD may reduce withdrawal symptoms and cannabis/tobacco dependence through modulation of endocannabinoid, serotoninergic and glutamatergic systems; (d) the preclinical pro-cognitive effects of CBD still lack significant results in psychiatric disorders. In conclusion, current evidences suggest that CBD has the ability to reduce psychotic, anxiety and withdrawal symptoms by means of several hypothesised pharmacological properties. However, further studies should include larger randomised controlled samples and investigate the impact of CBD on biological measures in order to correlate CBD's clinical effects to potential modifications of neurotransmitters signalling and structural and functional cerebral changes.", "title": "" }, { "docid": "a049645d97654a366212fcb0d3772e28", "text": "The post-adoption behaviors of online service users are critical performance factors for online service providers. To fill an academic gap that persists regarding bloggers’ switching behavior across online service substitutes, this empirical study investigates which factors affect bloggers who switch social network sites, in an attempt to understand specifically how push, pull, and mooring factors shape their switching intentions. The data to test the hypotheses come from an online survey of 319 bloggers, analyzed using partial least squares techniques. The results confirm positive influences of push and pull effects, a negative influence of mooring effects, and an interactive effect of push and mooring on switching intentions. The push–pull–mooring framework thus is a useful tool for comprehending the competing forces that influence the use of online service substitutes. In particular, perceptions of weak connections and writing anxiety push bloggers away, whereas relative enjoyment and usefulness pull bloggers to social network sites; switching cost and past experience also inhibit a change. These findings offer key insights and implications for the competitive strategy choices of online service providers. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ff7cce658de6150af85e95b25fd8e508", "text": "Using a panel of mandatory SEC disclosure filings we test the predictability of investment fraud. We find that past regulatory and legal violations, conflicts of interest, and monitoring, are significantly associated with future fraud. Avoiding the 5% of firms with the highest fraud risk allows investors to avoid 29.7% of investment frauds, and over half of the total dollar losses from fraud. Even after excluding small frauds and fraud by rogue employees, we are able to predict at least 24.1% of frauds at a false positive rate of 5%. There is no evidence that investors are compensated for fraud risk through superior performance or lower fees. We also find that investors react strongly to the discovery of fraud, resulting in significantly higher rates of firm death and investor outflows. Our results provide investors and regulators with tools for predicting investment fraud. 
JEL Classifications: G2, G20, G28, K2, K22", "title": "" }, { "docid": "6539dddc2fe95b6d542d1654749af7eb", "text": "Botnets are the preeminent source of online crime and arguably the greatest threat to the Internet infrastructure. In this paper, we present ZombieCoin, a botnet command-and-control (C&C) mechanism that runs on the Bitcoin network. ZombieCoin offers considerable advantages over existing C&C techniques, most notably the fact that Bitcoin is designed to resist the very regulatory processes currently used to combat botnets. We believe this is a desirable avenue botmasters may explore in the near future and our work is intended as a first step towards devising effective countermeasures.", "title": "" }, { "docid": "76375aa50ebe8388d653241ba481ecd2", "text": "Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting. Generative models have been explored as a means to approximate the distribution of old tasks and bypass storage of real data. Here we propose a cumulative closed-loop generator and embedded classifier using an AC-GAN architecture provided with external regularization by a small buffer. We evaluate incremental learning using a notoriously hard paradigm, “single headed learning,” in which each task is a disjoint subset of classes in the overall dataset, and performance is evaluated on all previous classes. First, we show that the variability contained in a small percentage of a dataset (memory buffer) accounts for a significant portion of the reported accuracy, both in multi-task and continual learning settings. Second, we show that using a generator to continuously output new images while training provides an up-sampling of the buffer, which prevents catastrophic forgetting and yields superior performance when compared to a fixed buffer. We achieve an average accuracy for all classes of 92.26% in MNIST and 76.15% in FASHION-MNIST after 5 tasks using GAN sampling with a buffer of only 0.17% of the entire dataset size. We compare to a network with regularization (EWC) which shows a deteriorated average performance of 29.19% (MNIST) and 26.5% (FASHION). The baseline of no regularization (plain gradient descent) performs at 99.84% (MNIST) and 99.79% (FASHION) for the last task, but below 3% for all previous tasks. Our method has very low long-term memory cost, the buffer, as well as negligible intermediate memory storage.", "title": "" } ]
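The last passage in the block above describes countering catastrophic forgetting by rehearsing examples held in a small memory buffer alongside a generator. Purely as an illustrative sketch of the buffer-rehearsal idea (not the AC-GAN architecture that abstract describes), a reservoir-style buffer could look as follows; the class name, capacity, and usage snippet are hypothetical.

```python
import numpy as np

class ReplayBuffer:
    """Tiny reservoir-style buffer holding a fixed number of past examples."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.rng = np.random.default_rng(seed)
        self.x, self.y = [], []
        self.seen = 0

    def add(self, x, y):
        # reservoir sampling keeps every example seen so far with equal probability
        self.seen += 1
        if len(self.x) < self.capacity:
            self.x.append(x); self.y.append(y)
        else:
            j = self.rng.integers(0, self.seen)
            if j < self.capacity:
                self.x[j], self.y[j] = x, y

    def sample(self, n):
        idx = self.rng.choice(len(self.x), size=min(n, len(self.x)), replace=False)
        return [self.x[i] for i in idx], [self.y[i] for i in idx]

# During training on the current task, each mini-batch would mix new examples
# with a draw from the buffer, e.g.:
#   xs_old, ys_old = buffer.sample(batch_size // 2)
#   train_step(xs_new + xs_old, ys_new + ys_old)
```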
scidocsrr
a8937b38a0c5056ea59c872531a9238d
Preprocessing DNS Log Data for Effective Data Mining
[ { "docid": "d4f1cdfe13fda841edfb31ced34a4ee8", "text": "ÐMissing data are often encountered in data sets used to construct effort prediction models. Thus far, the common practice has been to ignore observations with missing data. This may result in biased prediction models. In this paper, we evaluate four missing data techniques (MDTs) in the context of software cost modeling: listwise deletion (LD), mean imputation (MI), similar response pattern imputation (SRPI), and full information maximum likelihood (FIML). We apply the MDTs to an ERP data set, and thereafter construct regression-based prediction models using the resulting data sets. The evaluation suggests that only FIML is appropriate when the data are not missing completely at random (MCAR). Unlike FIML, prediction models constructed on LD, MI and SRPI data sets will be biased unless the data are MCAR. Furthermore, compared to LD, MI and SRPI seem appropriate only if the resulting LD data set is too small to enable the construction of a meaningful regression-based prediction model.", "title": "" } ]
[ { "docid": "8533b47323e9de6fb24e88a49c3e52fa", "text": "An ontology is a set of deenitions of content-speciic knowledge representation prim-itives: classes, relations, functions, and object constants. Ontolingua is mechanism for writing ontologies in a canonical format, such that they can be easily translated into a variety of representation and reasoning systems. This allows one to maintain the ontol-ogy in a single, machine-readable form while using it in systems with diierent syntax and reasoning capabilities. The syntax and semantics are based on the KIF knowledge interchange format 11]. Ontolingua extends KIF with standard primitives for deening classes and relations, and organizing knowledge in object-centered hierarchies with inheritance. The Ontolingua software provides an architecture for translating from KIF-level sentences into forms that can be eeciently stored and reasoned about by target representation systems. Currently, there are translators into LOOM, Epikit, and Algernon, as well as a canonical form of KIF. This paper describes the basic approach of Ontolingua to the ontology sharing problem, introduces the syntax, and describes the semantics of a few ontological commitments made in the software. Those commitments, which are reeected in the on-tolingua syntax and the primitive vocabulary of the frame ontology, include: a distinction between deenitional and nondeenitional assertions; the organization of knowledge with classes, instances, sets, and second-order relations; and assertions whose meaning depends on the contents of the knowledge base. Limitations of Ontolingua's \\conser-vative\" approach to sharing ontologies and alternative approaches to the problem are discussed.", "title": "" }, { "docid": "42676d245f38cade2abefa1df09423ed", "text": "BACKGROUND\nComprehensive literature reviews of historical perspectives and evidence supporting cannabis/cannabinoids in the treatment of pain, including migraine and headache, with associated neurobiological mechanisms of pain modulation have been well described. Most of the existing literature reports on the cannabinoids Δ9 -tetrahydrocannabinol (THC) and cannabidiol (CBD), or cannabis in general. There are many cannabis strains that vary widely in the composition of cannabinoids, terpenes, flavonoids, and other compounds. These components work synergistically to produce wide variations in benefits, side effects, and strain characteristics. Knowledge of the individual medicinal properties of the cannabinoids, terpenes, and flavonoids is necessary to cross-breed strains to obtain optimal standardized synergistic compositions. This will enable targeting individual symptoms and/or diseases, including migraine, headache, and pain.\n\n\nOBJECTIVE\nReview the medical literature for the use of cannabis/cannabinoids in the treatment of migraine, headache, facial pain, and other chronic pain syndromes, and for supporting evidence of a potential role in combatting the opioid epidemic. Review the medical literature involving major and minor cannabinoids, primary and secondary terpenes, and flavonoids that underlie the synergistic entourage effects of cannabis. Summarize the individual medicinal benefits of these substances, including analgesic and anti-inflammatory properties.\n\n\nCONCLUSION\nThere is accumulating evidence for various therapeutic benefits of cannabis/cannabinoids, especially in the treatment of pain, which may also apply to the treatment of migraine and headache. 
There is also supporting evidence that cannabis may assist in opioid detoxification and weaning, thus making it a potential weapon in battling the opioid epidemic. Cannabis science is a rapidly evolving medical sector and industry with increasingly regulated production standards. Further research is anticipated to optimize breeding of strain-specific synergistic ratios of cannabinoids, terpenes, and other phytochemicals for predictable user effects, characteristics, and improved symptom and disease-targeted therapies.", "title": "" }, { "docid": "55ca84497c465c236b309adc597fe3ad", "text": "BACKGROUND\nSelf-myofascial release (SMFR) is a type of myofascial release performed by the individual themselves rather than by a clinician, typically using a tool.\n\n\nOBJECTIVES\nTo review the literature regarding studies exploring acute and chronic clinical effects of SMFR.\n\n\nMETHODS\nPubMed and Google Scholar databases were searched during February 2015 for studies containing words related to the topic of SMFR.\n\n\nRESULTS\nAcutely, SMFR seems to increase flexibility and reduce muscle soreness but does not impede athletic performance. It may lead to improved arterial function, improved vascular endothelial function, and increased parasympathetic nervous system activity acutely, which could be useful in recovery. There is conflicting evidence whether SMFR can improve flexibility long-term.\n\n\nCONCLUSION\nSMFR appears to have a range of potentially valuable effects for both athletes and the general population, including increasing flexibility and enhancing recovery.", "title": "" }, { "docid": "321309a290260c353de1c1e8a84ccb22", "text": "The eld of Qualitative Spatial Reasoning is now an active research area in its own right within AI (and also in Geographical Information Systems) having grown out of earlier work in philosophical logic and more general Qualitative Reasoning in AI. In this paper (which is an updated version of 25]) I will survey the state of the art in Qualitative Spatial Reasoning, covering representation and reasoning issues as well as pointing to some application areas. 1 What is Qualitative Reasoning? The principal goal of Qualitative Reasoning (QR) 129] is to represent not only our everyday commonsense knowledge about the physical world, but also the underlying abstractions used by engineers and scientists when they create quantitative models. Endowed with such knowledge, and appropriate reasoning methods , a computer could make predictions, diagnoses and explain the behaviour of physical systems in a qualitative manner, even when a precise quantitative description is not available 1 or is computationally intractable. The key to a qualitative representation is not simply that it is symbolic, and utilises discrete quantity spaces, but that the distinctions made in these discretisations are relevant to the behaviour being modelled { i.e. distinctions are only introduced if they are necessary to model some particular aspect of the domain with respect to the task in hand. Even very simple quantity spaces can be very useful, e.g. the quantity space consisting just of f?; 0; +g, representing the two semi-open intervals of the real number line, and their dividing point, is widely used in the literature, e.g. 129]. Given such a quantity space, one then wants to be able to compute with it. 
There is normally a natural ordering (either partial or total) associated with a quantity space, and one form of simple but eeective inference 1 Note that although one use for qualitative reasoning is that it allows inferences to be made in the absence of complete knowledge, it does this not by probabilistic or fuzzy techniques (which may rely on arbitrarily assigned probabilities or membership values) but by refusing to diierentiate between quantities unless there is suucient evidence to do so; this is achieved essentially by collapsingìndistinguishable' values into an equivalence class which becomes a qualitative quantity. (The case where the indistinguishability relation is not an equivalence relation has not been much considered, except by 86, 83].)", "title": "" }, { "docid": "e67b75e11ca6dd9b4e6c77b3cb92cceb", "text": "The incidence of malignant melanoma continues to increase worldwide. This cancer can strike at any age; it is one of the leading causes of loss of life in young persons. Since this cancer is visible on the skin, it is potentially detectable at a very early stage when it is curable. New developments have converged to make fully automatic early melanoma detection a real possibility. First, the advent of dermoscopy has enabled a dramatic boost in clinical diagnostic ability to the point that melanoma can be detected in the clinic at the very earliest stages. The global adoption of this technology has allowed accumulation of large collections of dermoscopy images of melanomas and benign lesions validated by histopathology. The development of advanced technologies in the areas of image processing and machine learning have given us the ability to allow distinction of malignant melanoma from the many benign mimics that require no biopsy. These new technologies should allow not only earlier detection of melanoma, but also reduction of the large number of needless and costly biopsy procedures. Although some of the new systems reported for these technologies have shown promise in preliminary trials, widespread implementation must await further technical progress in accuracy and reproducibility. In this paper, we provide an overview of computerized detection of melanoma in dermoscopy images. First, we discuss the various aspects of lesion segmentation. Then, we provide a brief overview of clinical feature segmentation. Finally, we discuss the classification stage where machine learning algorithms are applied to the attributes generated from the segmented features to predict the existence of melanoma.", "title": "" }, { "docid": "1978a6937e8fbe1e61b638a75584832f", "text": "This paper proposes a novel algorithm to identify three inertial parameters: sprung mass, yaw moment of inertia, and longitudinal position of the center of gravity. A four-wheel nonlinear vehicle model with roll dynamics and a correlation between the inertial parameters is used for a dual unscented Kalman filter to simultaneously identify the inertial parameters and the vehicle state. A local observability analysis on the nonlinear vehicle model is used to activate and deactivate different modes of the proposed algorithm. 
Extensive CarSim simulations and experimental tests show the performance and robustness of the proposed approach on a flat road with a constant tire-road friction coefficient.", "title": "" }, { "docid": "2cc1383f98adb6f9e522fe2b933d35e5", "text": "This paper presents the innovative design of an air cooled permanent magnet assisted synchronous reluctance machine (PMaSyRM) for automotive traction application. Key design features include low cost ferrite magnets in an optimized rotor geometry with high saliency ratio, low weight and sufficient mechanical strength as well as a tailored hairpin stator winding in order to meet the demands of an A-segment battery electric vehicle (BEV). Effective torque ripple reduction techniques are analyzed and a suitable combination is chosen to keep additional manufacturing measures as low as possible. Although the ferrite magnets exhibit low remanence, it is shown that their contribution to the electrical machine's performance is essential in the field weakening region. Efficiency optimized torque-speed-characteristics are identified, including additional losses of the inverter, showing an overall system efficiency of more than 94 %. Lastly, the results of no load measurements of a prototype are compared to the FEM simulation results, indicating the proposed design of a PMaSyRM as a cost-effective alternative to state-of-the-art permanent magnet synchronous machines (PMSM) for vehicle traction purposes.", "title": "" }, { "docid": "41d338dd3a1d0b37e9050d0fcdb27569", "text": "Loneliness and depression are associated, in particular in older adults. Less is known about the role of social networks in this relationship. The present study analyzes the influence of social networks in the relationship between loneliness and depression in the older adult population in Spain. A population-representative sample of 3535 adults aged 50 years and over from Spain was analyzed. Loneliness was assessed by means of the three-item UCLA Loneliness Scale. Social network characteristics were measured using the Berkman–Syme Social Network Index. Major depression in the previous 12 months was assessed with the Composite International Diagnostic Interview (CIDI). Logistic regression models were used to analyze the survey data. Feelings of loneliness were more prevalent in women, those who were younger (50–65), single, separated, divorced or widowed, living in a rural setting, with a lower frequency of social interactions and smaller social network, and with major depression. Among people feeling lonely, those with depression were more frequently married and had a small social network. Among those not feeling lonely, depression was associated with being previously married. In depressed people, feelings of loneliness were associated with having a small social network; while among those without depression, feelings of loneliness were associated with being married. The type and size of social networks have a role in the relationship between loneliness and depression. Increasing social interaction may be more beneficial than strategies based on improving maladaptive social cognition in loneliness to reduce the prevalence of depression among Spanish older adults.", "title": "" }, { "docid": "73e4fed83bf8b1f473768ce15d6a6a86", "text": "Improving science, technology, engineering, and mathematics (STEM) education, especially for traditionally disadvantaged groups, is widely recognized as pivotal to the U.S.'s long-term economic growth and security. 
In this article, we review and discuss current research on STEM education in the U.S., drawing on recent research in sociology and related fields. The reviewed literature shows that different social factors affect the two major components of STEM education attainment: (1) attainment of education in general, and (2) attainment of STEM education relative to non-STEM education conditional on educational attainment. Cognitive and social psychological characteristics matter for both major components, as do structural influences at the neighborhood, school, and broader cultural levels. However, while commonly used measures of socioeconomic status (SES) predict the attainment of general education, social psychological factors are more important influences on participation and achievement in STEM versus non-STEM education. Domestically, disparities by family SES, race, and gender persist in STEM education. Internationally, American students lag behind those in some countries with less economic resources. Explanations for group disparities within the U.S. and the mediocre international ranking of US student performance require more research, a task that is best accomplished through interdisciplinary approaches.", "title": "" }, { "docid": "1be6aecdc3200ed70ede2d5e96cb43be", "text": "In this paper we are exploring different models and methods for improving the performance of text independent speaker identification system for mobile devices. The major issues in speaker recognition for mobile devices are (i) presence of varying background environment, (ii) effect of speech coding introduced by the mobile device, and (iii) impairments due to wireless channel. In this paper, we are proposing multi-SNR multi-environment speaker models and speech enhancement (preprocessing) methods for improving the performance of speaker recognition system in mobile environment. For this study, we have simulated five different background environments (Car, Factory, High frequency, pink noise and white Gaussian noise) using NOISEX data. Speaker recognition studies are carried out on TIMIT, cellular, and microphone speech databases. Autoassociative neural network models are explored for developing these multi-SNR multi-environment speaker models. The results indicate that the proposed multi-SNR multi-environment speaker models and speech enhancement preprocessing methods have enhanced the speaker recognition performance in the presence of different noisy environments.", "title": "" }, { "docid": "1453350c8134ecfe272255b71e7707ad", "text": "Program slicing is a viable method to restrict the focus of a task to specific sub-components of a program. Examples of applications include debugging, testing, program comprehension, restructuring, downsizing, and parallelization. This paper discusses different statement deletion based slicing methods, together with algorithms and applications to software engineering.", "title": "" }, { "docid": "b551478bd5897278fb5e3ff6acfb9fa6", "text": "In this paper, a back-electromotive force-based method is used in conjunction with three very low-cost integrated circuits based on the Hall-effect to estimate the rotor position of the direct-drive permanent-magnet synchronous motors (PMSMs) to be used for traction purpose of an electric wheelchair. A speed estimator based on a rotor frame machine model has been implemented for the PMSM drives, and then the rotor position estimation is achieved by means of a discrete integration of the estimated speed. 
The three Hall effect sensors are used to detect the rotor initial position as well to reset the error on the rotor position estimation every 60 electrical degrees. In the specific application, support vector machine technique has been implemented together with the field oriented control. Simulations and experimental results are shown in the paper", "title": "" }, { "docid": "843816964f6862bee7981229ccaf6432", "text": "We present a practical approach to global motion planning and terrain assessment for ground robots in generic three-dimensional (3D) environments, including rough outdoor terrain, multilevel facilities, and more complex geometries. Our method computes optimized six-dimensional trajectories compliant with curvature and continuity constraints directly on unordered point cloud maps, omitting any kind of explicit surface reconstruction, discretization, or topology extraction. We assess terrain geometry and traversability on demand during motion planning, by fitting robot-sized planar patches to the map and analyzing the local distribution of map points. Our motion planning approach consists of sampling-based initial trajectory generation, followed by precise local optimization according to a custom cost measure, using a novel, constraint-aware trajectory optimization paradigm. We embed these methods in a complete autonomous navigation system based on localization and mapping by means of a 3D laser scanner and iterative closest point matching, suitable for both static and dynamic environments. The performance of the planning and terrain assessment algorithms is evaluated in offline experiments using recorded and simulated sensor data. Finally, we present the results of navigation experiments in three different environments—rough outdoor terrain, a two-level parking garage, and a dynamic environment, demonstrating how the proposed methods enable autonomous navigation in complex 3D terrain.", "title": "" }, { "docid": "ae38bb46fd3ceed3f4800b6421b45d74", "text": "Medicinal data mining methods are used to analyze the medical data information resources. Medical data mining content mining and structure methods are used to analyze the medical data contents. The effort to develop knowledge and experience of frequent specialists and clinical selection data of patients collected in databases to facilitate the diagnosis process is considered a valuable option. Diagnosis of heart disease is a significant and tedious task in medicine. The term Heart disease encompasses the various diseases that affect the heart. The exposure of heart disease from various factors or symptom is an issue which is not complimentary from false presumptions often accompanied by unpredictable effects. Association rule mining procedures are used to extract item set relations. Item set regularities are used in the rule mining process. The data classification is based on MAFIA algorithms which result in accuracy, the data is evaluated using entropy based cross validations and partition techniques and the results are compared. Here using the C4.5 algorithm as the training algorithm to show rank of heart attack with the decision tree. Finally, the heart disease database is clustered using the K-means clustering algorithm, which will remove the data applicable to heart attack from the database. 
The results showed that the medicinal prescription and designed prediction system is capable of prophesying the heart attack successfully.", "title": "" }, { "docid": "fc25adc42c7e4267a9adfe13ddcabf75", "text": "As automotive electronics have increased, models for predicting the transmission characteristics of wiring harnesses, suitable for the automotive EMC tests, are needed. In this paper, the repetitive structures of the cross-sectional shape of the twisted pair cable is focused on. By taking account of RLGC parameters, a theoretical analysis modeling for whole cables, based on multi-conductor transmission line theory, is proposed. Furthermore, the theoretical values are compared with measured values and a full-wave simulator. In case that a twisted pitch, a length of the cable, and a height of reference ground plane are changed, the validity of the proposed model is confirmed.", "title": "" }, { "docid": "e66e7677aa769135a6a9b9ea5c807212", "text": "At ICSE'2013, there was the first session ever dedicated to automatic program repair. In this session, Kim et al. presented PAR, a novel template-based approach for fixing Java bugs. We strongly disagree with key points of this paper. Our critical review has two goals. First, we aim at explaining why we disagree with Kim and colleagues and why the reasons behind this disagreement are important for research on automatic software repair in general. Second, we aim at contributing to the field with a clarification of the essential ideas behind automatic software repair. In particular we discuss the main evaluation criteria of automatic software repair: understandability, correctness and completeness. We show that depending on how one sets up the repair scenario, the evaluation goals may be contradictory. Eventually, we discuss the nature of fix acceptability and its relation to the notion of software correctness.", "title": "" }, { "docid": "7feea3bcba08a889ba779a23f79556d7", "text": "In this report, monodispersed ultra-small Gd2O3 nanoparticles capped with hydrophobic oleic acid (OA) were synthesized with average particle size of 2.9 nm. Two methods were introduced to modify the surface coating to hydrophilic for bio-applications. With a hydrophilic coating, the polyvinyl pyrrolidone (PVP) coated Gd2O3 nanoparticles (Gd2O3-PVP) showed a reduced longitudinal T1 relaxation time compared with OA and cetyltrimethylammonium bromide (CTAB) co-coated Gd2O3 (Gd2O3-OA-CTAB) in the relaxation study. The Gd2O3-PVP was thus chosen for its further application study in MRI with an improved longitudinal relaxivity r1 of 12.1 mM(-1) s(-1) at 7 T, which is around 3 times as that of commercial contrast agent Magnevist(®). In vitro cell viability in HK-2 cell indicated negligible cytotoxicity of Gd2O3-PVP within preclinical dosage. In vivo MR imaging study of Gd2O3-PVP nanoparticles demonstrated considerable signal enhancement in the liver and kidney with a long blood circulation time. Notably, the OA capping agent was replaced by PVP through ligand exchange on the Gd2O3 nanoparticle surface. The hydrophilic PVP grants the Gd2O3 nanoparticles with a polar surface for bio-application, and the obtained Gd2O3-PVP could be used as an in vivo indicator of reticuloendothelial activity.", "title": "" } ]
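One passage in the block above (the heart-disease mining abstract ending just before this point) applies K-means clustering to patient records to isolate data relevant to heart attack. As a hedged illustration of that clustering step only, a minimal hand-rolled K-means might look like this; the feature columns, the values, and the choice of k = 2 are hypothetical and not taken from the cited study.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # initialise centroids by picking k distinct rows at random
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each row to its nearest centroid (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster went empty
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# hypothetical patient features: [age, resting blood pressure, cholesterol]
X = np.array([[63, 145, 233], [41, 130, 204], [67, 160, 286],
              [56, 120, 236], [62, 140, 268], [38, 138, 175]], dtype=float)
labels, centroids = kmeans(X, k=2)
print(labels)     # cluster index per patient
print(centroids)  # cluster centres
```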
scidocsrr
16ff11f34b38693f929c69700d37ef81
Risky Business: Emotion, Decision-Making, and Addiction
[ { "docid": "c9d46300b513bca532ec080371511313", "text": "On a gambling task that models real-life decisions, patients with bilateral lesions of the ventromedial prefrontal cortex (VM) opt for choices that yield high immediate gains in spite of higher future losses. In this study, we addressed three possibilities that may account for this behaviour: (i) hypersensitivity to reward; (ii) insensitivity to punishment; and (iii) insensitivity to future consequences, such that behaviour is always guided by immediate prospects. For this purpose, we designed a variant of the original gambling task in which the advantageous decks yielded high immediate punishment but even higher future reward. The disadvantageous decks yielded low immediate punishment but even lower future reward. We measured the skin conductance responses (SCRs) of subjects after they had received a reward or punishment. Patients with VM lesions opted for the disadvantageous decks in both the original and variant versions of the gambling task. The SCRs of VM lesion patients after they had received a reward or punishment were not significantly different from those of controls. In a second experiment, we investigated whether increasing the delayed punishment in the disadvantageous decks of the original task or decreasing the delayed reward in the disadvantageous decks of the variant task would shift the behaviour of VM lesion patients towards an advantageous strategy. Both manipulations failed to shift the behaviour of VM lesion patients away from the disadvantageous decks. These results suggest that patients with VM lesions are insensitive to future consequences, positive or negative, and are primarily guided by immediate prospects. This 'myopia for the future' in VM lesion patients persists in the face of severe adverse consequences, i.e. rising future punishment or declining future reward.", "title": "" } ]
[ { "docid": "c1e946b4aaf8ce2d6de18b335c6434c4", "text": "The caption prediction task is in 2018 in its second edition after the task was first run in the same format in 2017. For 2018 the database was more focused on clinical images to limit diversity. As automatic methods with limited manual control were used to select images, there is still an important diversity remaining in the image data set. Participation was relatively stable compared to 2017. Usage of external data was restricted in 2018 to limit critical remarks regarding the use of external resources by some groups in 2017. Results show that this is a difficult task but that large amounts of training data can make it possible to detect the general topics of an image from the biomedical literature. For an even better comparison it seems important to filter the concepts for the images that are made available. Very general concepts (such as “medical image”) need to be removed, as they are not specific for the images shown, and also extremely rare concepts with only one or two examples can not really be learned. Providing more coherent training data or larger quantities can also help to learn such complex models.", "title": "" }, { "docid": "1a0c0423474888301270078e33bb7223", "text": "In wireless sensor networks (WSN’s), our main aim is to ensure the confidentiality of the data which has been sensed , aggregated and communicated to the base node. This will be achieved using key management. Symmetric key management uses only one key for encryption and decryption.Symmetric key management is generally preferred in WSN, because it will consume less battery power, memory and induces less computation overhead. Asymmetric key cryptography uses two separate keys , for encryption and decryption but those two keys are interconnected with complex mathematical algorithm. Since it is using complex mathematical algorithms it will induce huge overhead on power , computation and memory Asymmetric key cryptography is not preferred in WSN. Asymmetrickey cryptography is more secured and efficient when compared to symmetric key cryptography, this paper analyzes various opportunities of implementing asymmetric cryptography in wireless sensor networks.(WSN : Wireless Sensor Networks) I.Introduction. A.Key Management. Key management is systematic process of generating , distributing keys to the various sensor nodes , if the nodes gets compromised we need to revocate the keys to those compromised nodes. The principal concerns regarding the key management framework are as follows:  Key deployment/pre-distribution: it is process of generating keys and deploying it in the individual nodes.  Key establishment: Here, the methods by which any pair of nodes or a group of nodes establishes a secure session are discussed.  Member/node addition: This process of adding a node to the network, extra node added must be able to establish secure connections with the other nodes.  Member/node eviction: This process eliminates the node from the network such that it will not again be able to establish secure sessions with any of the existing nodes in the network. Moreover, the node will not to decipher future congestion in the network .  Key Revocation: Process of re issuing fresh keys to the compromised nodes. B.Types of key Management. 1.Symmetric Key Management : Here only one type of key is preferred to authenticate the nodes. Key pre distribution is widely preferred in WSN. 
Symmetric key cryptography uses relatively simple mathematical operations, and hence it demands less computation power and consumes less battery power. Since it uses simpler keys, it also incurs less memory overhead. Because it consumes fewer resources, it is widely preferred in wireless sensor networks.[10] 2. Asymmetric Key Management: it uses two different keys, a public key and a private key, for encryption and decryption. The two keys are mathematically linked with each other. Asymmetric key cryptography incurs a large overhead in terms of memory, computation power and battery, and hence it is generally not preferred in WSN. In the rest of the paper, we discuss asymmetric key cryptography and the ways in which it can be implemented in WSN.[10] Asymmetric key cryptography was first introduced by W. Diffie and M. Hellman, who presented the main idea of a public key cryptosystem. Figure 1: Public-key cryptosystem. Asymmetric key cryptography uses a pair of keys: a public key and a private key. The public key can be broadcast to all nodes, whereas the private key is kept secret. A mathematical relation links the public and private keys. Although the public key is broadcast to every node, an attacker is not able to retrieve the private key with the help of the public key alone. Assume that a message M is encrypted with the known public key. To decrypt the ciphertext C we need the private key. Occasionally we prefer to encrypt using the private key and decrypt using the public key. We need to simplify this scheme to use it in the wireless scenario: we can replace the trusted third party by the network deployer. The network deployer can act as the trusted third party and deploy nodes with a private/public key pair. Every node present in the network can broadcast its public key to its neighbors if required to encrypt messages. However, in order to authenticate the public key there is a need for a trusted Certificate Authority which issues appropriate certificates. Among all public key algorithms, there are three established families of practical relevance. The security of these systems is based on hard mathematical problems: RSA is named after its inventors Rivest, Shamir and Adleman [6][8]. This algorithm employs two large prime numbers P and Q. The strength of this scheme is based on the difficulty of finding these large prime numbers, which are essential to recover the secret key, whereas the public key can be distributed freely. This algorithm has a variety of applications on conventional networks and is widely used in e-commerce. Since large prime numbers and factoring are involved, it demands high operational requirements in terms of resources. It takes a toll on computational power and memory, and the long-running operations demand power supply as well. All three resources are crucial in wireless sensor networks, and hence it is very tedious to employ this algorithm in wireless sensor network security. Discrete logarithm: ElGamal proposed another family of asymmetric cryptography algorithms which is widely used in the security of conventional networks. The strength of this scheme relies upon the difficulty of finding logarithms over a finite field. The resource usage of these algorithms is comparable to RSA, hence it is difficult to implement them in wireless sensor networks. 
Elliptic Curve Cryptography is based on the algebraic structure of elliptic curves over a finite field; the key size depends on the size of the curve chosen, and the strength of this method relies on the difficulty of the discrete logarithm problem in this setting (ECDLP). It has a smaller key size when compared to non-elliptic-curve methods. Therefore it is considered the most attractive family for embedded devices; however, the use of this method in wireless sensor networks is still tedious due to resource constraints. As we discussed in the abstract, public key cryptography techniques are a lot more demanding because of the scarcity of resources in wireless sensor networks. A few researchers declared that the implementation of public key cryptography on sensor nodes is not feasible; hence hardware support may be needed for public key operations. Researchers concentrated on improving the hardware quality of wireless sensor networks in order to fit asymmetric cryptography into wireless sensor networks. None of the sensor node platforms currently available on the market provides hardware support for public key cryptography. Any change in the hardware may influence the size of the sensors as well as the cost. There is a serious demand for cost-effective public key cryptography hardware solutions for sensor nodes. Meanwhile, researchers concentrated more on software-based solutions for asymmetric key management. With respect to wireless sensor networks, asymmetric key cryptography would seem to be the best method for the key broadcasting process. It gives the necessary security and also provides improved scalability and resilience when compared to symmetric key solutions. Despite all these advantages, the application of public key cryptography protocols in WSNs remains challenging. All three asymmetric cryptography algorithms discussed require authentication of public keys before key distribution. Implementing these public key cryptography solutions will require the presence of a public key infrastructure in which certificate authorities authenticate public keys. II. Public Key Infrastructure. The security of public-key encryption schemes holds only if the authenticity of the public key is assured. This service is provided with the use of certificate schemes. In cryptography, a public key infrastructure is a methodology that connects public keys with corresponding user identities by means of a Certificate Authority. The main task of a PKI is to create, manage, store, distribute and revoke digital certificates. A typical public key infrastructure consists of: a Certificate Authority (CA), a third party which provides and verifies digital certificates; a Registration Authority (RA), which acts as a verifier for the CA and performs initial authentication before digital certificates are issued; a certificate repository, i.e. one or more directories where certificates (with their public keys) are stored together with Certificate Revocation Lists (CRLs); and a certificate management system. In order to participate in a public key infrastructure, a user A must first enroll or register with the Registration Authority. The Registration Authority validates the user's identity and forwards his public key to the Certificate Authority. 
Main aim of the Certificate Authority is to attach the public key and the identifying information and credentials supplied by the Registration Authority. Public key was generated at the end of this process.. The binding is declared when a trusted Certificate Authority digitally signs the public key certificate with its own private key. Certificate authority provides a digital certificate for each user that uniquely identifies each and every user. When we try to establish the public key infrastructure in wireless sensor networks In the context of wireless sensor networks we have to face many implementation concerns., In addition to the energy , resource , time and transmission constraints WSN also suffers from the lack of trusted infrastructure. Sensor nodes are generally deployed in adverse conditions and hence it is also difficult to deploy certificate authority. Even if we deploy certificate aut", "title": "" }, { "docid": "4186e2c50355516bf8860a7fea4415cc", "text": "Performing approximate data matching has always been an intriguing problem for both industry and academia. This task becomes even more challenging when the requirement of data privacy rises. In this paper, we propose a novel technique to address the problem of efficient privacy-preserving approximate record linkage. The secure framework we propose consists of two basic components. First, we utilize a secure blocking component based on phonetic algorithms statistically enhanced to improve security. Second, we use a secure matching component where actual approximate matching is performed using a novel private approach of the Levenshtein Distance algorithm. Our goal is to combine the speed of private blocking with the increased accuracy of approximate secure matching. Category: Ubiquitous computing; Security and privacy", "title": "" }, { "docid": "f447c062b72bf4fcb559ba30621464be", "text": "The acute fish test is an animal test whose ecotoxicological relevance is worthy of discussion. The primary aim of protection in ecotoxicology is the population and not the individual. Furthermore the concentration of pollutants in the environment is normally not in the lethal range. Therefore the acute fish test covers solely the situation after chemical spills. Nevertheless, acute fish toxicity data still belong to the base set used for the assessment of chemicals. The embryo test with the zebrafish Danio rerio (DarT) is recommended as a substitute for the acute fish test. For validation an international laboratory comparison test was carried out. A summary of the results is presented in this paper. Based on the promising results of testing chemicals and waste water the test design was validated by the DIN-working group \"7.6 Fischei-Test\". A normed test guideline for testing waste water with fish is available. The test duration is short (48 h) and within the test different toxicological endpoints can be examined. Endpoints from the embryo test are suitable for QSAR-studies. Besides the use in ecotoxicology the introduction as a toxicological model was investigated. Disturbance of pigmentation and effects on the frequency of heart-beat were examined. A further important application is testing of teratogenic chemicals. Based on the results DarT could be a screening test within preclinical studies.", "title": "" }, { "docid": "c78520bf3a6b46a314792eb72f34f080", "text": "This paper presents a small-signal ac modeling of a feedback system for a flyback light-emitting diode (LED) driver. 
From this analysis, a dimming-feedback control method, which has a constant loop gain at dc, is proposed for dimmable LED drivers. Since the proposed method controls the steady-state error of the feedback system, an additional phase angle detection circuit and a LED current adjusting circuit are not required to control the output LED current according to the phase angle of the input voltage in a TRIAC-dimmable LED driver. Therefore, the proposed control method provides a simple structure, small size, and low bill of materials. A prototype of a flyback LED driver with the proposed dimming-feedback control method is implemented and experimented on to verify the validity of the analysis and the proposed control method. The prototype shows that the proposed control method varies the output LED current according to the phase angle of the TRIAC dimmer with no additional circuits. Thus, the proposed control method has high compatibility with TRIAC dimmers and can be applied to any dimmable LED driver with a nondimmable control integrated circuit.", "title": "" }, { "docid": "6e893839d1d4698698d38eb18073251a", "text": "Sequence-to-sequence (seq2seq) approach for low-resource ASR is a relatively new direction in speech research. The approach benefits by performing model training without using lexicon and alignments. However, this poses a new problem of requiring more data compared to conventional DNN-HMM systems. In this work, we attempt to use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port them towards 4 other BABEL languages using transfer learning approach. We also explore different architectures for improving the prior multilingual seq2seq model. The paper also discusses the effect of integrating a recurrent neural network language model (RNNLM) with a seq2seq model during decoding. Experimental results show that the transfer learning approach from the multilingual model shows substantial gains over monolingual models across all 4 BABEL languages. Incorporating an RNNLM also brings significant improvements in terms of %WER, and achieves recognition performance comparable to the models trained with twice more training data.", "title": "" }, { "docid": "89673b068d58f02f794be3cf6fc45f62", "text": "Graphical models are used to depict relevant aspects of real-world domains intended to be supported by an information system. Various approaches for modeling exist and approaches such as object-oriented and process-oriented modeling methods are in widespread use. These modeling methods differ in their expressive power as well as in their complexity of use, thereby leading to an important investment decision for organizations seeking to conduct modeling projects. In this paper, we used an established approach for evaluating the complexity of conceptual modeling methods and compared two important industry standards for modeling, Unified Modeling Language and Business Process Modeling Notation, based on their complexity. Our research finds that BPMN has very high levels of complexity when contrasted with UML.", "title": "" }, { "docid": "c6576bb8585fff4a9ac112943b1e0785", "text": "Three-dimensional (3D) kinematic models are widely-used in videobased figure tracking. We show that these models can suffer from singularities when motion is directed along the viewing axis of a single camera. 
The single camera case is important because it arises in many interesting applications, such as motion capture from movie footage, video surveillance, and vision-based user-interfaces. We describe a novel two-dimensional scaled prismatic model (SPM) for figure registration. In contrast to 3D kinematic models, the SPM has fewer singularity problems and does not require detailed knowledge of the 3D kinematics. We fully characterize the singularities in the SPM and demonstrate tracking through singularities using synthetic and real examples. We demonstrate the application of our model to motion capture from movies. Fred Astaire is tracked in a clip from the film “Shall We Dance”. We also present the use of monocular hand tracking in a 3D user-interface. These results demonstrate the benefits of the SPM in tracking with a single source of video. KEY WORDS—AUTHOR: PLEASE PROVIDE", "title": "" }, { "docid": "20ec78dfbfe5b9709f25bd28e0e66e8d", "text": "BACKGROUND\nElectronic medical records (EMRs) contain vast amounts of data that is of great interest to physicians, clinical researchers, and medial policy makers. As the size, complexity, and accessibility of EMRs grow, the ability to extract meaningful information from them has become an increasingly important problem to solve.\n\n\nMETHODS\nWe develop a standardized data analysis process to support cohort study with a focus on a particular disease. We use an interactive divide-and-conquer approach to classify patients into relatively uniform within each group. It is a repetitive process enabling the user to divide the data into homogeneous subsets that can be visually examined, compared, and refined. The final visualization was driven by the transformed data, and user feedback direct to the corresponding operators which completed the repetitive process. The output results are shown in a Sankey diagram-style timeline, which is a particular kind of flow diagram for showing factors' states and transitions over time.\n\n\nRESULTS\nThis paper presented a visually rich, interactive web-based application, which could enable researchers to study any cohorts over time by using EMR data. The resulting visualizations help uncover hidden information in the data, compare differences between patient groups, determine critical factors that influence a particular disease, and help direct further analyses. We introduced and demonstrated this tool by using EMRs of 14,567 Chronic Kidney Disease (CKD) patients.\n\n\nCONCLUSIONS\nWe developed a visual mining system to support exploratory data analysis of multi-dimensional categorical EMR data. By using CKD as a model of disease, it was assembled by automated correlational analysis and human-curated visual evaluation. The visualization methods such as Sankey diagram can reveal useful knowledge about the particular disease cohort and the trajectories of the disease over time.", "title": "" }, { "docid": "6b72358b5cbbe349ee09f88773762ab1", "text": "Estimating virtual CT(vCT) image from MRI data is in crucial need for medical application due to the relatively high dose of radiation exposure in CT scan and redundant workflow of both MR and CT. Among the existing work, the fully convolutional neural network(FCN) shows its superiority in generating vCT of high fidelity which merits further investigation. 
However, the most widely used evaluation metrics mean absolute error (MAE) and peak signal to noise ratio (PSNR) may not be adequate enough to reflect the structure quality of the vCT, while most of the current FCN based approaches focus more on the architectures but have little attention on the loss functions which are closely related to the final evaluation. The objective of this thesis is to apply Structure Similarity(SSIM) as loss function for predicting vCT from MRI based on FCN and see whether the prediction has improvement in terms of structure compared with conventionally used l or l loss. Inspired by the SSIM, the contextual l has been proposed to investigate the impact of introducing context information to the loss function. CT data was non-rigidly registered to MRI for training and evaluation. Patch-based 3D FCN were optimized for different loss functions to predict vCT from MRI data. Specifically for optimizing SSIM, the training data should be normalization to [0, 1] and architecture should be slightly changed by adding the ReLu layer before the output to guarantee the convexity of the SSIM during the training. The evaluation results are carried out with 7-folds cross validation of the 14 patients. MAE, PSNR and SSIM for the whole volume and tissue-wise are evaluted respectively. All optimizations successfully converged well and cl outperformed the other losses in terms of PSNR and MAE but with the worst SSIM, DSSIM works better at preserving the structures and resulting in smooth output. Yuan Zhou Delft, August 2017", "title": "" }, { "docid": "5cde30d7be98b6247e5f856a3bc898a7", "text": "A novel ridge-port Rotman-lens is described, which operates as a lens with tapered slot-line ports. The lens parallel-plates mirror the ridge-ports to tapered slot-line ports. The lens height is half the height of the antenna array row, and two lenses can be stacked and feed one dual-polarized antenna array row, thus yielding a compact antenna system. The lens is air-filled, so it is easy to manufacture and repeatable in performance with no dielectric tolerances and losses, and it is lightweight compared to a dielectric lens. The lens with elongated tapered ports operates down to the antenna array low frequency, thus utilizing the large antenna bandwidth. These features make the ridge-port air-filled lens more useful than a conventional microstrip Rotman lens.", "title": "" }, { "docid": "2439ce82bb2008fb0495f8a0ad6553fc", "text": "This paper presents a switched state-space modeling approach for a switched-capacitor power amplifier. In contrast to state of the art behavioral models for nonlinear devices like power amplifiers, the state-space representation allows a straightforward inclusion of the nonidealities of the applied input sources. Hence, adding noise on a power supply or phase distortions on the carrier signal do not require a redesign of the mathematical model. The derived state-space model (SSM), which can be efficiently implemented in any numerical simulation tool, allows a significant reduction of the required simulation run-time (14x speedup factor) with respect to standard Cadence Spectre simulations. The derived state-space model (SSM) has been implemented in MATLAB/Simulink and its results have been verified by comparison with Cadence Spectre simulations.", "title": "" }, { "docid": "7ddfa92cee856e2ef24caf3e88d92b93", "text": "Applications are getting increasingly interconnected. 
Although the interconnectedness provide new ways to gather information about the user, not all user information is ready to be directly implemented in order to provide a personalized experience to the user. Therefore, a general model is needed to which users’ behavior, preferences, and needs can be connected to. In this paper we present our works on a personality-based music recommender system in which we use users’ personality traits as a general model. We identified relationships between users’ personality and their behavior, preferences, and needs, and also investigated different ways to infer users’ personality traits from user-generated data of social networking sites (i.e., Facebook, Twitter, and Instagram). Our work contributes to new ways to mine and infer personality-based user models, and show how these models can be implemented in a music recommender system to positively contribute to the user experience.", "title": "" }, { "docid": "ee96b4c7d15008f4b8831ecf2d337b1d", "text": "This paper proposes the identification of regions of interest in biospeckle patterns using unsupervised neural networks of the type Self-Organizing Maps. Segmented images are obtained from the acquisition and processing of laser speckle sequences. The dynamic speckle is a phenomenon that occurs when a beam of coherent light illuminates a sample in which there is some type of activity, not visible, which results in a variable pattern over time. In this particular case the method is applied to the evaluation of bacterial chemotaxis. Image stacks provided by a set of experiments are processed to extract features of the intensity dynamics. A Self-Organizing Map is trained and its cells are colored according to a criterion of similarity. During the recall stage the features of patterns belonging to a new biospeckle sample impact on the map, generating a new image using the color of the map cells impacted by the sample patterns. It is considered that this method has shown better performance to identify regions of interest than those that use a single descriptor. To test the method a chemotaxis assay experiment was performed, where regions were differentiated according to the bacterial motility within the sample.", "title": "" }, { "docid": "96c10ca887c0210615d16655f62665e0", "text": "The two key challenges in hierarchical classification are to leverage the hierarchical dependencies between the class-labels for improving performance, and, at the same time maintaining scalability across large hierarchies. In this paper we propose a regularization framework for large-scale hierarchical classification that addresses both the problems. Specifically, we incorporate the hierarchical dependencies between the class-labels into the regularization structure of the parameters thereby encouraging classes nearby in the hierarchy to share similar model parameters. Furthermore, we extend our approach to scenarios where the dependencies between the class-labels are encoded in the form of a graph rather than a hierarchy. To enable large-scale training, we develop a parallel-iterative optimization scheme that can handle datasets with hundreds of thousands of classes and millions of instances and learning terabytes of parameters. Our experiments showed a consistent improvement over other competing approaches and achieved state-of-the-art results on benchmark datasets.", "title": "" }, { "docid": "72bbd468c00ae45979cce3b771e4c2bf", "text": "Twitter is a popular microblogging and social networking service with over 100 million users. 
Users create short messages pertaining to a wide variety of topics. Certain topics are highlighted by Twitter as the most popular and are known as “trending topics.” In this paper, we will outline methodologies of detecting and identifying trending topics from streaming data. Data from Twitter’s streaming API will be collected and put into documents of equal duration. Data collection procedures will allow for analysis over multiple timespans, including those not currently associated with Twitter-identified trending topics. Term frequency-inverse document frequency analysis and relative normalized term frequency analysis are performed on the documents to identify the trending topics. Relative normalized term frequency analysis identifies unigrams, bigrams, and trigrams as trending topics, while term frequcny-inverse document frequency analysis identifies unigrams as trending topics.", "title": "" }, { "docid": "ff0c3f9fa9033be78b107c2f052203fa", "text": "Complex networks, such as biological, social, and communication networks, often entail uncertainty, and thus, can be modeled as probabilistic graphs. Similar to the problem of similarity search in standard graphs, a fundamental problem for probabilistic graphs is to efficiently answer k-nearest neighbor queries (k-NN), which is the problem of computing the k closest nodes to some specific node. In this paper we introduce a framework for processing k-NN queries in probabilistic graphs. We propose novel distance functions that extend well-known graph concepts, such as shortest paths. In order to compute them in probabilistic graphs, we design algorithms based on sampling. During k-NN query processing we efficiently prune the search space using novel techniques. Our experiments indicate that our distance functions outperform previously used alternatives in identifying true neighbors in real-world biological data. We also demonstrate that our algorithms scale for graphs with tens of millions of edges.", "title": "" }, { "docid": "ba17adc705d92a5a7d6122a6bd25c732", "text": "Penile size is a major concern among men all over world. Men throughout history and still today, feel the need to enlarge their penis in order to improve their self-esteem and sexual performance. There are a variety of social, cultural, and psychological aspects regarding the size of men genitals, resulting such that, men often feel the need to enlarge their penis. “Bigger is better” is still a relevant belief in our days and based on the “phallic identity” – the tendency of males to seek their personality in their penis. This trend is supported by the numerous and still increasing number of penile enlargement procedures performed in the past years and today, generally in men with normal size penises. This condition is called “the locker room syndrome” – men concerned about their flaccid penile size even though in most cases their penile length and girth are normal. 
however, the surgical procedures available for changing penile appearance remains highly controversial mainly due to high complication rates and low satisfactory surgical outcomes.", "title": "" }, { "docid": "511c90eadbbd4129fdf3ee9e9b2187d3", "text": "BACKGROUND\nPressure ulcers are associated with substantial health burdens but may be preventable.\n\n\nPURPOSE\nTo review the clinical utility of pressure ulcer risk assessment instruments and the comparative effectiveness of preventive interventions in persons at higher risk.\n\n\nDATA SOURCES\nMEDLINE (1946 through November 2012), CINAHL, the Cochrane Library, grant databases, clinical trial registries, and reference lists.\n\n\nSTUDY SELECTION\nRandomized trials and observational studies on effects of using risk assessment on clinical outcomes and randomized trials of preventive interventions on clinical outcomes.\n\n\nDATA EXTRACTION\nMultiple investigators abstracted and checked study details and quality using predefined criteria.\n\n\nDATA SYNTHESIS\nOne good-quality trial found no evidence that use of a pressure ulcer risk assessment instrument, with or without a protocolized intervention strategy based on assessed risk, reduces risk for incident pressure ulcers compared with less standardized risk assessment based on nurses' clinical judgment. In higher-risk populations, 1 good-quality and 4 fair-quality randomized trials found that more advanced static support surfaces were associated with lower risk for pressure ulcers compared with standard mattresses (relative risk range, 0.20 to 0.60). Evidence on the effectiveness of low-air-loss and alternating-air mattresses was limited, with some trials showing no clear differences from advanced static support surfaces. Evidence on the effectiveness of nutritional supplementation, repositioning, and skin care interventions versus usual care was limited and had methodological shortcomings, precluding strong conclusions.\n\n\nLIMITATION\nOnly English-language articles were included, publication bias could not be formally assessed, and most studies had methodological shortcomings.\n\n\nCONCLUSION\nMore advanced static support surfaces are more effective than standard mattresses for preventing ulcers in higher-risk populations. The effectiveness of formal risk assessment instruments and associated intervention protocols compared with less standardized assessment methods and the effectiveness of other preventive interventions compared with usual care have not been clearly established.", "title": "" }, { "docid": "59da726302c06abef243daee87cdeaa7", "text": "The present research aims at gaining a better insight on the psychological barriers to the introduction of social robots in society at large. Based on social psychological research on intergroup distinctiveness, we suggested that concerns toward this technology are related to how we define and defend our human identity. A threat to distinctiveness hypothesis was advanced. We predicted that too much perceived similarity between social robots and humans triggers concerns about the negative impact of this technology on humans, as a group, and their identity more generally because similarity blurs category boundaries, undermining human uniqueness. Focusing on the appearance of robots, in two studies we tested the validity of this hypothesis. 
In both studies, participants were presented with pictures of three types of robots that differed in their anthropomorphic appearance varying from no resemblance to humans (mechanical robots), to some body shape resemblance (biped humanoids) to a perfect copy of human body (androids). Androids raised the highest concerns for the potential damage to humans, followed by humanoids and then mechanical robots. In Study 1, we further demonstrated that robot anthropomorphic appearance (and not the attribution of mind and human nature) was responsible for the perceived damage that the robot could cause. In Study 2, we gained a clearer insight in the processes underlying this effect by showing that androids were also judged as most threatening to the human–robot distinction and that this perception was responsible for the higher perceived damage to humans. Implications of these findings for social robotics are discussed.", "title": "" } ]
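The trending-topic passage in the list above scores terms with term frequency-inverse document frequency over documents built from equal-duration windows of streamed tweets. A minimal sketch of that scoring in plain Python follows; the tokenization, the one-document-per-window convention, and the top-k cut-off are illustrative assumptions rather than details taken from the passage.

    import math
    from collections import Counter

    def trending_terms(windows, top_k=10):
        # windows: list of token lists, one per equal-duration time slice;
        # the newest window is scored by tf-idf against all windows.
        n = len(windows)
        df = Counter()                        # in how many windows does each term appear?
        for tokens in windows:
            df.update(set(tokens))
        latest = Counter(windows[-1])         # raw term frequency in the newest window
        total = sum(latest.values()) or 1
        scores = {term: (count / total) * math.log(n / df[term])
                  for term, count in latest.items()}
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    # example with three hypothetical 10-minute windows of tokenized tweets
    windows = [["game", "rain", "news"], ["game", "news", "vote"], ["quake", "quake", "news"]]
    print(trending_terms(windows, top_k=2))   # "quake" ranks first, since it is frequent and new

Terms that occur in every window get an idf of zero, which is what pushes persistent vocabulary down and window-specific bursts up.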
scidocsrr
f57249e139e59334f4f9ee0f45bb7026
Moving toward more perfect unions: daily and long-term consequences of approach and avoidance goals in romantic relationships.
[ { "docid": "8f2a4de3669b26af17cd127387769ad6", "text": "This research provides the first empirical investigation of how approach and avoidance motives for engaging in sex in intimate relationships are associated with personal well-being and relationship quality. A 2-week daily experience study of college student dating couples tested specific predictions from the theoretical model and included both longitudinal and dyadic components. Whereas approach sex motives were positively associated with personal and interpersonal well-being, avoidance sex motives were negatively associated with well-being. Engaging in sex for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner s motives for sex were also associated with well-being. Implications for the conceptualization of sexuality in relationships along these two dimensions are discussed. Sexual interactions in young adulthood can be positive forces that bring partners closer and make them feel good about themselves and their relationships. In the National Health and Social Life Survey (NHSLS), 78% of participants in monogamous dating relationships reported being either extremely or very pleased with their sexual relationship (Laumann, Gagnon, Michael, & Michaels, 1994). For instance, when asked to rate specific feelings they experienced after engaging in sex, a majority of the participants reported positive feelings (i.e., ‘‘felt loved,’’ ‘‘thrilled,’’ ‘‘wanted,’’ or ‘‘taken care of ’’). More generally, feelings of satisfaction with the sexual aspects of an intimate relationship contribute to overall relationship satisfaction and stability over time (e.g., Sprecher, 2002; see review by Sprecher & Cate, 2004). In short, sexual interactions can be potent forces that sustain and enhance intimate relationships. For some individuals and under certain circumstances, however, sexual interactions can be anything but positive and rewarding. They may create emotional distress, personal discontent, and relationship conflict. For instance, in the NHSLS, a sizable minority of respondents in dating relationships indicated that sex with an exclusive partner made them feel ‘‘sad,’’ ‘‘anxious and worried,’’ ‘‘scared and afraid,’’ or ‘‘guilty’’ (Laumann et al., 1994). Negative reactions to sex may stem from such diverse sources as prior traumatic or coercive experiences in relationships, feeling at a power disadvantage in one s current relationship, or discrepancies in sexual desire between partners, to name a few (e.g., Davies, Katz, & Jackson, 1999; Muehlenhard & Schrag, 1991). The studies reported here were based on Emily A. Impett s dissertation. Preparation of this article was supported by a fellowship awarded to the first author from the Sexuality Research Fellowship Program of the Social Science Research Council with funds provided by the Ford Foundation. We thank Katie Bishop, Renee Delgado, and Laura Tsang for their assistance with data collection and Andrew Christensen, Terri Conley, Martie Haselton, and Linda Sax for comments on an earlier version of this manuscript. Correspondence should be addressed to Emily A. Impett, Center for Research on Gender and Sexuality, San Francisco State University, 2017 Mission Street #300, San Francisco, CA 94110, e-mail: eimpett@sfsu.edu. Personal Relationships, 12 (2005), 465–482. Printed in the United States of America. Copyright 2005 IARR. 
1350-4126=05", "title": "" }, { "docid": "f03a96d81f7eeaf8b9befa73c2b6fbd5", "text": "This research provided the first empirical investigation of how approach and avoidance motives for sacrifice in intimate relationships are associated with personal well-being and relationship quality. In Study 1, the nature of everyday sacrifices made by dating partners was examined, and a measure of approach and avoidance motives for sacrifice was developed. In Study 2, which was a 2-week daily experience study of college students in dating relationships, specific predictions from the theoretical model were tested and both longitudinal and dyadic components were included. Whereas approach motives for sacrifice were positively associated with personal well-being and relationship quality, avoidance motives for sacrifice were negatively associated with personal well-being and relationship quality. Sacrificing for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner's motives for sacrifice were also associated with well-being and relationship quality. Implications for the conceptualization of relationship maintenance processes along these 2 dimensions are discussed.", "title": "" } ]
[ { "docid": "fe3570c283fbf8b1f504e7bf4c2703a8", "text": "We propose ThalNet, a deep learning model inspired by neocortical communication via the thalamus. Our model consists of recurrent neural modules that send features through a routing center, endowing the modules with the flexibility to share features over multiple time steps. We show that our model learns to route information hierarchically, processing input data by a chain of modules. We observe common architectures, such as feed forward neural networks and skip connections, emerging as special cases of our architecture, while novel connectivity patterns are learned for the text8 compression task. Our model outperforms standard recurrent neural networks on several sequential benchmarks.", "title": "" }, { "docid": "b537af893b84a4c41edb829d45190659", "text": "We seek a complete description for the neurome of the Drosophila, which involves tracing more than 20,000 neurons. The currently available tracings are sensitive to background clutter and poor contrast of the images. In this paper, we present Tree2Tree2, an automatic neuron tracing algorithm to segment neurons from 3D confocal microscopy images. Building on our previous work in segmentation [1], this method uses an adaptive initial segmentation to detect the neuronal portions, as opposed to a global strategy that often results in under segmentation. In order to connect the disjoint portions, we use a technique called Path Search, which is based on a shortest path approach. An intelligent pruning step is also implemented to delete undesired branches. Tested on 3D confocal microscopy images of GFP labeled Drosophila neurons, the visual and quantitative results suggest that Tree2Tree2 is successful in automatically segmenting neurons in images plagued by background clutter and filament discontinuities.", "title": "" }, { "docid": "c4fa73bd2d6b06f4655eeacaddf3b3a7", "text": "In recent years, the robotic research area has become extremely prolific in terms of wearable active exoskeletons for human body motion assistance, with the presentation of many novel devices, for upper limbs, lower limbs, and the hand. The hand shows a complex morphology, a high intersubject variability, and offers limited space for physical interaction with a robot: as a result, hand exoskeletons usually are heavy, cumbersome, and poorly usable. This paper introduces a novel device designed on the basis of human kinematic compatibility, wearability, and portability criteria. This hand exoskeleton, briefly HX, embeds several features as underactuated joints, passive degrees of freedom ensuring adaptability and compliance toward the hand anthropometric variability, and an ad hoc design of self-alignment mechanisms to absorb human/robot joint axes misplacement, and proposes a novel mechanism for the thumb opposition. The HX kinematic design and actuation are discussed together with theoretical and experimental data validating its adaptability performances. Results suggest that HX matches the self-alignment design goal and is then suited for close human-robot interaction.", "title": "" }, { "docid": "3d310295592775bbe785692d23649c56", "text": "BACKGROUND\nEvidence indicates that sexual assertiveness is one of the important factors affecting sexual satisfaction. According to some studies, traditional gender norms conflict with women's capability in expressing sexual desires. 
This study examined the relationship between gender roles and sexual assertiveness in married women in Mashhad, Iran.\n\n\nMETHODS\nThis cross-sectional study was conducted on 120 women who referred to Mashhad health centers through convenient sampling in 2014-15. Data were collected using Bem Sex Role Inventory (BSRI) and Hulbert index of sexual assertiveness. Data were analyzed using SPSS 16 by Pearson and Spearman's correlation tests and linear Regression Analysis.\n\n\nRESULTS\nThe mean scores of sexual assertiveness was 54.93±13.20. According to the findings, there was non-significant correlation between Femininity and masculinity score with sexual assertiveness (P=0.069 and P=0.080 respectively). Linear regression analysis indicated that among the predictor variables, only Sexual function satisfaction was identified as the sexual assertiveness summary predictor variables (P=0.001).\n\n\nCONCLUSION\nBased on the results, sexual assertiveness in married women does not comply with gender role, but it is related to Sexual function satisfaction. So, counseling psychologists need to consider this variable when designing intervention programs for modifying sexual assertiveness and find other variables that affect sexual assertiveness.", "title": "" }, { "docid": "26131d574bc3f440aa33b5eafa66c39c", "text": "The meeting of the oldest profession with modern slavery is the topic of this paper. After a brief introduction to prostitution and prostitution-related human trafficking, this paper focuses on the Dutch policy debate. A System Dynamics simulation model related to the Dutch situation developed to explore and provide insights related to the effects of proposed policies is presented in this paper. Using the simulation model, a ‘quick and dirty’ policy analysis is first of all performed, and preliminary conclusions are drawn. These preliminary conclusions are further tested under uncertainty, using two different but relatively similar simulation models. The main conclusions are that demand side measures are necessary, but not sufficient. The topic is so complex and uncertain that simple (combinations of) basic policies will not hold in all circumstances, which is why this topic requires further exploration and policy testing under deep uncertainty.", "title": "" }, { "docid": "8fdbbd5cde4c32f935a79760d9d87a9c", "text": "This paper presents a review of the advances in strong motion recording since the early 1930s, based mostly on the experiences in the United States. A particular emphasis is placed on the amplitude and spatial resolution of recording, which both must be “adequate” to capture the nature of strong earthquake ground motion and response of structures. The first strong motion accelerographs had optical recording system, dynamic range of about 50 dB and useful life longer than 30 years. Digital strong motion accelerographs started to become available in the late 1970’s. Their dynamic range has been increasing progressively, and at present is about 135 dB. Most models have had useful life shorter than 5 to 10 years. One benefit from a high dynamic range is early trigger and anticipated ability to compute permanent displacements. Another benefit is higher sensitivity and hence a possibility to record smaller amplitude motions (aftershocks, smaller local earthquakes and distant large earthquakes), which would augment significantly the strong motion databases. 
The present trend of upgrading existing and adding new stations with high dynamic range accelerographs has lead to deployment of relatively small We dedicate this paper to Donald E. Hudson (1916-1999), a pioneer in the field of Earthquake Engineering, and our teacher and mentor. His contributions to academic research and development of earthquake instrumentation are without parallel. With a rare ability to attract, motivate and support yo ung scientists, he created a long and impressive list of Ph.D. graduates who are now professors, researchers and leaders in Earthquake Engineering.", "title": "" }, { "docid": "a986826041730d953dfbf9fbc1b115a6", "text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.", "title": "" }, { "docid": "c8453255bf200ed841229f5e637b2074", "text": "One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a ‘‘model discrepancy’’ term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty. r 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c0e5be05d75b1a65c7367d6d0cf8d63b", "text": "Hand pose estimation has matured rapidly in recent years. The introduction of commodity depth sensors and a multitude of practical applications have spurred new advances. We provide an extensive analysis of the state-of-the-art, focusing on hand pose estimation from a single depth frame. 
To do so, we have implemented a considerable number of systems, and will release all software and evaluation code. We summarize important conclusions here: (1) Pose estimation appears roughly solved for scenes with isolated hands. However, methods still struggle to analyze cluttered scenes where hands may be interacting with nearby objects and surfaces. To spur further progress we introduce a challenging new dataset with diverse, cluttered scenes. (2) Many methods evaluate themselves with disparate criteria, making comparisons difficult. We define a consistent evaluation criteria, rigorously motivated by human experiments. (3) We introduce a simple nearest-neighbor baseline that outperforms most existing systems. This implies that most systems do not generalize beyond their training sets. This also reinforces the under-appreciated point that training data is as important as the model itself. We conclude with directions for future progress.", "title": "" }, { "docid": "92b4a18334345b55aae40b99adcc3840", "text": "Online social networks (OSNs) are becoming increasingly popular and Identity Clone Attacks (ICAs) that aim at creating fake identities for malicious purposes on OSNs are becoming a significantly growing concern. Such attacks severely affect the trust relationships a victim has built with other users if no active protection is applied. In this paper, we first analyze and characterize the behaviors of ICAs. Then we propose a detection framework that is focused on discovering suspicious identities and then validating them. Towards detecting suspicious identities, we propose two approaches based on attribute similarity and similarity of friend networks. The first approach addresses a simpler scenario where mutual friends in friend networks are considered; and the second one captures the scenario where similar friend identities are involved. We also present experimental results to demonstrate flexibility and effectiveness of the proposed approaches. Finally, we discuss some feasible solutions to validate suspicious identities.", "title": "" }, { "docid": "34b3c5ee3ea466c23f5c7662f5ce5b33", "text": "The concept of a super value node is developed to extend the theory of influence diagrams to allow dynamic programming to be performed within this graphical modeling framework. The operations necessary to exploit the presence of these nodes and efficiently analyze the models are developed. The key result is that by representing value function separability in the structure of the graph of the influence diagram, formulation is simplified and operations on the model can take advantage of the separability. From the decision analysis perspective, this allows simple exploitation of separability in the value function of a decision problem, which can significantly reduce memory and computation requirements. Importantly, this allows algorithms to be designed to solve influence diagrams that automatically recognize the opportunity for applying dynamic programming. From the decision processes perspective, influence diagrams with super value nodes allow efficient formulation and solution of nonstandard decision process structures. They also allow the exploitation of conditional independence between state variables. 
Examples are provided that demonstrate these advantages.", "title": "" }, { "docid": "c9c9f02f8446c2ce5a20e5793b9b826a", "text": "This paper introduces new algorithms (Fuzzy relative of the CLARANS algorithm FCLARANS and Fuzzy c Medoids based on randomized search FCMRANS) for fuzzy clustering of relational data. Unlike existing fuzzy c-medoids algorithm (FCMdd) in which the within cluster dissimilarity of each cluster is minimized in each iteration by recomputing new medoids given current memberships, FCLARANS minimizes the same objective function minimized by FCMdd by changing current medoids in such away that that the sum of the within cluster dissimilarities is minimized. Computing new medoids may be effected by noise because outliers may join the computation of medoids while the choice of medoids in FCLARANS is dictated by the location of a predominant fraction of points inside a cluster and, therefore, it is less sensitive to the presence of outliers. In FCMRANS the step of computing new medoids in FCMdd is modified to be based on randomized search. Furthermore, a new initialization procedure is developed that add randomness to the initialization procedure used with FCMdd. Both FCLARANS and FCMRANS are compared with the robust and linearized version of fuzzy c-medoids (RFCMdd). Experimental results with different samples of the Reuter-21578, Newsgroups (20NG) and generated datasets with noise show that FCLARANS is more robust than both RFCMdd and FCMRANS. Finally, both FCMRANS and FCLARANS are more efficient and their outputs are almost the same as that of RFCMdd in terms of classification rate. Keywords—Data Mining, Fuzzy Clustering, Relational Clustering, Medoid-Based Clustering, Cluster Analysis, Unsupervised Learning.", "title": "" }, { "docid": "8348a89e74707b8e42beb7589e2603b2", "text": "Skin-lightening agents such as kojic acid, arbutin, ellagic acid, lucinol and 5,5′-dipropylbiphenyl-2,2′-diol are used in ‘anti-ageing’ cosmetics. Cases of allergic contact dermatitis caused by these skin-lightening agents have been reported (1, 2). Vitamin C and its derivatives have also been used in cosmetics as skin-lightening agents for a long time. Vitamin C in topical agents is poorly absorbed through the skin, and is easily oxidized after percutaneous absorption. Recently, ascorbic acid derivatives have been developed with enhanced properties. The ascorbic acid derivative 3-o-ethyl-l-ascorbic acid (CAS no. 86404-048, molecular weight 204.18; Fig. 1), also known as vitamin C ethyl, is chemically stable and is more easily absorbed through the skin than the other vitamin C derivatives. Moreover, 3-o-ethyl-l-ascorbic acid has skinlightening properties. Here, we report a case of allergic contact dermatitis caused by a skin-lightening lotion containing 3-o-ethyl-l-ascorbic acid.", "title": "" }, { "docid": "fd0c32b1b4e52f397d0adee5de7e381c", "text": "Context. Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective. 
In this work, we review 156 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, braincomputer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods. Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to 1) the data, 2) the preprocessing methodology, 3) the DL design choices, 4) the results, and 5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results. Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozens to several millions, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. About 40% of the studies used convolutional neural networks (CNNs), while 14% used recurrent neural networks (RNNs), most often with a total of 3 to 10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was 5.4% across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. ∗The first two authors contributed equally to this work. Significance. To help the community progress and share work more effectively, we provide a list of recommendations for future studies. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly.", "title": "" }, { "docid": "a552f0ee9fafe273859a11f29cf7670d", "text": "A majority of the existing stereo matching algorithms assume that the corresponding color values are similar to each other. However, it is not so in practice as image color values are often affected by various radiometric factors such as illumination direction, illuminant color, and imaging device changes. For this reason, the raw color recorded by a camera should not be relied on completely, and the assumption of color consistency does not hold good between stereo images in real scenes. Therefore, the performance of most conventional stereo matching algorithms can be severely degraded under the radiometric variations. In this paper, we present a new stereo matching measure that is insensitive to radiometric variations between left and right images. Unlike most stereo matching measures, we use the color formation model explicitly in our framework and propose a new measure, called the Adaptive Normalized Cross-Correlation (ANCC), for a robust and accurate correspondence measure. The advantage of our method is that it is robust to lighting geometry, illuminant color, and camera parameter changes between left and right images, and does not suffer from the fattening effect unlike conventional Normalized Cross-Correlation (NCC). 
Experimental results show that our method outperforms other state-of-the-art stereo methods under severely different radiometric conditions between stereo images.", "title": "" }, { "docid": "860459f0a3f22ba8d6b8828251b70179", "text": "This paper proposes a new architecture of a controlled-temperature hot-wire anemometer using voltage feedback linearization. The voltage feedback linearizes the sensor input-output relationship and the controller is designed to achieve null steady-state error and reduce the system response time. Analysis of the behavior of the architecture modeled using Simulink is presented for a NTC sensor. Simulation results are presented and discussed, and the architecture is compared with the classical constant-temperature anemometer (CTA) one.", "title": "" }, { "docid": "788beb721cb4197a036f4ce207fcf36b", "text": "This paper presents the requirements, design criteria and methodology used to develop the design of a new selfcontained prosthetic hand to be used by transradial amputees. The design is based on users’ needs, on authors background and knowledge of the state of the art, and feasible fabrication technology with the aim of replicating as much as possible the functionality of the human hand. The paper focuses on the design approach and methodology which is divided into three steps: (i) the mechanical actuation units, design and actuation distribution; (ii) the mechatronic development and finally (iii) the controller architecture design. The design is presented here and compared with significant commercial devices and research prototypes.", "title": "" }, { "docid": "ed47a1a6c193b6c3699805f5be641555", "text": "Wind power generation differs from conventional thermal generation due to the stochastic nature of wind. Thus wind power forecasting plays a key role in dealing with the challenges of balancing supply and demand in any electricity system, given the uncertainty associated with the wind farm power output. Accurate wind power forecasting reduces the need for additional balancing energy and reserve power to integrate wind power. Wind power forecasting tools enable better dispatch, scheduling and unit commitment of thermal generators, hydro plant and energy storage plant and more competitive market trading as wind power ramps up and down on the grid. This paper presents an in-depth review of the current methods and advances in wind power forecasting and prediction. Firstly, numerical wind prediction methods from global to local scales, ensemble forecasting, upscaling and downscaling processes are discussed. Next the statistical and machine learning approach methods are detailed. Then the techniques used for benchmarking and uncertainty analysis of forecasts are overviewed, and the performance of various approaches over different forecast time horizons is examined. Finally, current research activities, challenges and potential future developments are appraised. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "440e45de4d13e89e3f268efa58f8a51a", "text": "This letter describes the concept, design, and measurement of a low-profile integrated microstrip antenna for dual-band applications. The antenna operates at both the GPS L1 frequency of 1.575 GHz with circular polarization and 5.88 GHz with a vertical linear polarization for dedicated short-range communication (DSRC) application. The antenna is low profile and meets stringent requirements on pattern/polarization performance in both bands. 
The design procedure is discussed, and full measured data are presented.", "title": "" }, { "docid": "60f2baba7922543e453a3956eb503c05", "text": "Pylearn2 is a machine learning research library. This does n t just mean that it is a collection of machine learning algorithms that share a comm n API; it means that it has been designed for flexibility and extensibility in ord e to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summar y of the library’s architecture, and a description of how the Pylearn2 communi ty functions socially.", "title": "" } ]
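The associative reinforcement learning passage in the list above (REINFORCE) adjusts the parameters of a stochastic policy along the gradient of expected reinforcement without explicitly estimating that gradient. The sketch below shows the core update for a softmax policy over a few actions with immediate reward; the bandit-style task, the learning rate, and the running-average baseline are assumptions added only to make the example runnable.

    import numpy as np

    rng = np.random.default_rng(0)
    true_reward = np.array([0.2, 0.5, 0.8])   # hypothetical expected reward per action
    theta = np.zeros(3)                        # policy parameters
    lr, baseline = 0.1, 0.0

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    for step in range(2000):
        probs = softmax(theta)
        a = rng.choice(3, p=probs)                  # sample an action from the stochastic policy
        r = float(rng.random() < true_reward[a])    # immediate stochastic reward
        grad_log_pi = -probs
        grad_log_pi[a] += 1.0                       # gradient of log pi(a | theta) for a softmax policy
        theta += lr * (r - baseline) * grad_log_pi  # REINFORCE update
        baseline += 0.01 * (r - baseline)           # baseline for variance reduction

    print(softmax(theta))   # the probability mass should concentrate on the best action

The update uses only the sampled action and its reward, which is exactly the property the passage highlights: no explicit gradient estimate of expected reinforcement is ever stored.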
scidocsrr
2c85bc4bbb0df935386e5fbee2fc1b35
Issues of Knowledge Management in the Public Sector
[ { "docid": "636b0dd2a23a87f91b2820d70d687a37", "text": "KNOWLEDGE is neither data nor information, though it is related to both, and the differences between these terms are often a matter of degree. We start with those more familiar terms both because they are more familiar and because we can understand knowledge best with reference to them. Confusion about what data, information, and knowledge are -how they differ, what those words mean -has resulted in enormous expenditures on technology initiatives that rarely deliver what the firms spending the money needed or thought they were getting. Often firms don't understand what they need until they invest heavily in a system that fails to provide it.", "title": "" } ]
[ { "docid": "29378712a9ab9031879c95ee8baad923", "text": "In recent decades, different extensional forms of fuzzy sets have been developed. However, these multitudinous fuzzy sets are unable to deal with quantitative information better. Motivated by fuzzy linguistic approach and hesitant fuzzy sets, the hesitant fuzzy linguistic term set was introduced and it is a more reasonable set to deal with quantitative information. During the process of multiple criteria decision making, it is necessary to propose some aggregation operators to handle hesitant fuzzy linguistic information. In this paper, two aggregation operators for hesitant fuzzy linguistic term sets are introduced, which are the hesitant fuzzy linguistic Bonferroni mean operator and the weighted hesitant fuzzy linguistic Bonferroni mean operator. Correspondingly, several properties of these two aggregation operators are discussed. Finally, a practical case is shown in order to express the application of these two aggregation operators. This case mainly discusses how to choose the best hospital about conducting the whole society resourcemanagement research included in awisdommedical health system. Communicated by V. Loia. B Zeshui Xu xuzeshui@263.net Xunjie Gou gouxunjie@qq.com Huchang Liao liaohuchang@163.com 1 Business School, Sichuan University, Chengdu 610064, China 2 School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China", "title": "" }, { "docid": "a11c2a1522ae4c4df55467d62e4bbc51", "text": "In this paper, a new design method considering a desired workspace and swing range of spherical joints of a DELTA robot is presented. The design is based on a new concept, which is the maximum inscribed workspace proposed in this paper. Firstly, the geometric description of the workspace for a DELTA robot is discussed, especially, the concept of the maximum inscribed workspace for the robot is proposed. The inscribed radius of the workspace on a workspace section is illustrated. As an applying example, a design result of the DELTA robot with a given workspace is presented and the reasonability is checked with the conditioning index. The results of the paper are very useful for the design and application of the parallel robot.", "title": "" }, { "docid": "1513af4802f1a2aaf2ae8fdfae891336", "text": "Computing the pairwise semantic similarity between all words on the Web is a computationally challenging task. Parallelization and optimizations are necessary. We propose a highly scalable implementation based on distributional similarity, implemented in the MapReduce framework and deployed over a 200 billion word crawl of the Web. The pairwise similarity between 500 million terms is computed in 50 hours using 200 quad-core nodes. We apply the learned similarity matrix to the task of automatic set expansion and present a large empirical study to quantify the effect on expansion performance of corpus size, corpus quality, seed composition and seed size. We make public an experimental testbed for set expansion analysis that includes a large collection of diverse entity sets extracted from Wikipedia.", "title": "" }, { "docid": "5f351dc1334f43ce1c80a1e78581d0f9", "text": "Based on keypoints extracted as salient image patches, an image can be described as a \"bag of visual words\" and this representation has been used in scene classification. 
The choice of dimension, selection, and weighting of visual words in this representation is crucial to the classification performance but has not been thoroughly studied in previous work. Given the analogy between this representation and the bag-of-words representation of text documents, we apply techniques used in text categorization, including term weighting, stop word removal, feature selection, to generate image representations that differ in the dimension, selection, and weighting of visual words. The impact of these representation choices to scene classification is studied through extensive experiments on the TRECVID and PASCAL collection. This study provides an empirical basis for designing visual-word representations that are likely to produce superior classification performance.", "title": "" }, { "docid": "f91bed4a709f52cc2dd74f78f52a24a2", "text": "This review summarizes some of the recent advances in the neurobiology of memory. Current research helps us to understand how memories are created and, conversely, how our memories can be influenced by stress, drugs, and aging. An understanding of how memories are encoded by the brain may also lead to new ideas about how to maximize the long-term retention of important information. There are multiple memory systems with different functions and, in this review, we focus on the conscious recollection of one's experience of events and facts and on memories tied to emotional responses. Memories are also classified according to time: from short-term memory, lasting only seconds or minutes, to long-term memory, lasting months or years. The advent of new functional neuroimaging methods provides an opportunity to gain insight into how the human brain supports memory formation. Each memory system has a distinct anatomical organization, where different parts of the brain are recruited during phases of memory storage. Within the brain, memory is a dynamic property of populations of neurons and their interconnections. Memories are laid down in our brains via chemical changes at the neuron level. An understanding of the neurobiology of memory may stimulate health educators to consider how various teaching methods conform to the process of memory formation.", "title": "" }, { "docid": "d6e61f8a150edb6cee4fc7ae48e7b6f1", "text": "Bayesian Networks (BNs) are an increasingly popular modelling technique in cyber security especially due to their capability to overcome data limitations. This is also exemplified by the growth of BN models development in cyber security. However, a comprehensive comparison and analysis of these models is missing. In this paper, we conduct a systematic review of the scientific literature and identify 17 standard BN models in cyber security. We analyse these models based on 8 different criteria and identify important patterns in the use of these models. A key outcome is that standard BNs are noticeably used for problems especially associated with malicious insiders. This study points out the core range of problems that were tackled using standard BN models in cyber security, and illuminates key research gaps.", "title": "" }, { "docid": "49d6b3f314b61ace11afc5eea7b652e3", "text": "Euler diagrams visually represent containment, intersection and exclusion using closed curves. They first appeared several hundred years ago, however, there has been a resurgence in Euler diagram research in the twenty-first century. 
This was initially driven by their use in visual languages, where they can be used to represent logical expressions diagrammatically. This work lead to the requirement to automatically generate Euler diagrams from an abstract description. The ability to generate diagrams has accelerated their use in information visualization, both in the standard case where multiple grouping of data items inside curves is required and in the area-proportional case where the area of curve intersections is important. As a result, examining the usability of Euler diagrams has become an important aspect of this research. Usability has been investigated by empirical studies, but much research has concentrated on wellformedness, which concerns how curves and other features of the diagram interrelate. This work has revealed the drawability of Euler diagrams under various wellformedness properties and has developed embedding methods that meet these properties. Euler diagram research surveyed in this paper includes theoretical results, generation techniques, transformation methods and the development of automated reasoning systems for Euler diagrams. It also overviews application areas and the ways in which Euler diagrams have been extended.", "title": "" }, { "docid": "923b4025d22bc146c53fb4c90f43ef72", "text": "In this paper we describe preliminary approaches for contentbased recommendation of Pinterest boards to users. We describe our representation and features for Pinterest boards and users, together with a supervised recommendation model. We observe that features based on latent topics lead to better performance than features based on userassigned Pinterest categories. We also find that using social signals (repins, likes, etc.) can improve recommendation quality.", "title": "" }, { "docid": "3e50e94f865425d3fa281f2a818c2806", "text": "For a reference on elliptic curves and their cryptographic applications, see: • Alfred J. Menezes, Elliptic curve public key cryptosystems, 1993. • Joseph H. Silverman and John Tate, Rational points on elliptic curves, 1992. (An undergraduate mathematics text on elliptic curves.) • J.W.S. Cassels, Lectures on elliptic curves, 1991. (Informal and mathematical.) An elliptic curve is not an ellipse! An ellipse is a degree 2 equation of the form x + ay = b. (However, given such an ellipse, you could try to compute the arc length of a certain portion of the curve; the integral which arises can be associated to an elliptic curve.)", "title": "" }, { "docid": "7fc4ee7e92d5139ed78c6e79ca9ac425", "text": "In this paper, we proposed an integrated model of both semantic-aware and contrast-aware saliency (SCA) combining both bottom-up and top-down cues for effective eye fixation prediction. The proposed SCA model contains two pathways. The first pathway is a deep neural network customized for semantic-aware saliency, which aims to capture the semantic information in images, especially for the presence of meaningful objects and object parts. The second pathway is based on on-line feature learning and information maximization, which learns an adaptive representation for the input and discovers the high contrast salient patterns within the image context. The two pathways characterize both long-term and short-term attention cues and are integrated using maxima normalization. 
Experimental results on artificial images and several benchmark dataset demonstrate the superior performance and better plausibility of the proposed model over both classic approaches and recent deep models.", "title": "" }, { "docid": "9c656210ae6819db391872ee56f5a7ef", "text": "We investigate how adversarial learning may be used for various animation tasks related to human motion synthesis. We propose a learning framework that we decline for building various models corresponding to various needs: a random synthesis generator that randomly produces realistic motion capture trajectories; conditional variants that allow controlling the synthesis by providing high-level features that the animation should match; a style transfer model that allows transforming an existing animation in the style of another one. Our work is built on the adversarial learning strategy that has been proposed in the machine learning field very recently (2014) for learning accurate generative models on complex data, and that has been shown to provide impressive results, mainly on image data. We report both objective and subjective evaluation results on motion capture data performed under emotion, the Emilya Dataset. Our results show the potential of our proposals for building models for a variety of motion synthesis tasks.", "title": "" }, { "docid": "db31a02d996b0a36d0bf215b7b7e8354", "text": "This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notion of frequency and filters traditionally defined for signals supported on regular domains such as discrete time and image grids has been recently generalized to irregular graph domains and defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency with principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between graph spectral properties of brain networks and the level of exposure to tasks performed and recognize the most contributing and important frequency signatures at different levels of task familiarity.", "title": "" }, { "docid": "6ac9ddefaeaddad00fb3d85b94b07f74", "text": "Cognitive architectures are theories of cognition that try to capture the essential representations and mechanisms that underlie cognition. Research in cognitive architectures has gradually moved from a focus on the functional capabilities of architectures to the ability to model the details of human behavior, and, more recently, brain activity. Although there are many different architectures, they share many identical or similar mechanisms, permitting possible future convergence. 
In judging the quality of a particular cognitive model, it is pertinent to not just judge its fit to the experimental data but also its simplicity and ability to make predictions.", "title": "" }, { "docid": "dcec6ef9e08d7bcfa86aca8d045b6bd4", "text": "This article examines the intellectual and institutional factors that contributed to the collaboration of neuropsychiatrist Warren McCulloch and mathematician Walter Pitts on the logic of neural networks, which culminated in their 1943 publication, \"A Logical Calculus of the Ideas Immanent in Nervous Activity.\" Historians and scientists alike often refer to the McCulloch-Pitts paper as a landmark event in the history of cybernetics, and fundamental to the development of cognitive science and artificial intelligence. This article seeks to bring some historical context to the McCulloch-Pitts collaboration itself, namely, their intellectual and scientific orientations and backgrounds, the key concepts that contributed to their paper, and the institutional context in which their collaboration was made. Although they were almost a generation apart and had dissimilar scientific backgrounds, McCulloch and Pitts had similar intellectual concerns, simultaneously motivated by issues in philosophy, neurology, and mathematics. This article demonstrates how these issues converged and found resonance in their model of neural networks. By examining the intellectual backgrounds of McCulloch and Pitts as individuals, it will be shown that besides being an important event in the history of cybernetics proper, the McCulloch-Pitts collaboration was an important result of early twentieth-century efforts to apply mathematics to neurological phenomena.", "title": "" }, { "docid": "af82ea560b98535f3726be82a2d23536", "text": "Influence Maximization is an extensively-studied problem that targets at selecting a set of initial seed nodes in the Online Social Networks (OSNs) to spread the influence as widely as possible. However, it remains an open challenge to design fast and accurate algorithms to find solutions in large-scale OSNs. Prior Monte-Carlo-simulation-based methods are slow and not scalable, while other heuristic algorithms do not have any theoretical guarantee and they have been shown to produce poor solutions for quite some cases. In this paper, we propose hop-based algorithms that can easily scale to millions of nodes and billions of edges. Unlike previous heuristics, our proposed hop-based approaches can provide certain theoretical guarantees. Experimental evaluations with real OSN datasets demonstrate the efficiency and effectiveness of our algorithms.", "title": "" }, { "docid": "99dc118b4e0754bd8a57bdde63243242", "text": "We present a fully implicit Eulerian technique for simulating free surface viscous liquids which eliminates artifacts in previous approaches, efficiently supports variable viscosity, and allows the simulation of more compelling viscous behaviour than previously achieved in graphics. Our method exploits a variational principle which automatically enforces the complex boundary condition on the shear stress at the free surface, while giving rise to a simple discretization with a symmetric positive definite linear system. We demonstrate examples of our technique capturing realistic buckling, folding and coiling behavior. 
In addition, we explain how to handle domains whose boundary comprises both ghost fluid Dirichlet and variational Neumann parts, allowing correct behaviour at free surfaces and solid walls for both our viscous solve and the variational pressure projection of Batty et al. [BBB07].", "title": "" }, { "docid": "ebf707fb477def2a79b8fc30db542006", "text": "Wireless sensor networks (WSNs) have recently gained the attention of researchers in many challenging aspects. The most important challenge in these networks is energy conservation. One of the most popular solutions in making WSNs energy-efficient is to cluster the networks. In clustering, the nodes are divided into some clusters and then some nodes, called cluster-heads, are selected to be the head of each cluster. In a typical clustered WSN, the regular nodes sense the field and send their data to the cluster-head, then, after gathering and aggregating the data, the cluster-head transmits them to the base station. Clustering the nodes in WSNs has many benefits, including scalability, energy-efficiency, and reducing routing delay. In this paper we present a state-of-the-art and comprehensive survey on clustering approaches. We first begin with the objectives of clustering, clustering characteristics, and then present a classification on the clustering algorithms in WSNs. Some of the clustering objectives considered in this paper include scalability, fault-tolerance, data aggregation/fusion, increased connectivity, load balancing, and collision avoidance. Then, we survey the proposed approaches in the past few years in a classified manner and compare them based on different metrics such as mobility, cluster count, cluster size, and algorithm complexity.", "title": "" }, { "docid": "1738a8ccb1860e5b85e2364f437d4058", "text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word error rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can lead to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses. Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimentally, this approach leads to a significant WER reduction in a large vocabulary recognition task.", "title": "" }, { "docid": "107c839a73c12606d4106af7dc04cd96", "text": "This study presents a novel four-fingered robotic hand to attain a soft contact and high stability under disturbances while holding an object. Each finger is constructed using a tendon-driven skeleton, granular materials corresponding to finger pulp, and a deformable rubber skin. This structure provides soft contact with an object, as well as high adaptation to its shape. Even if the object is deformable and fragile, a grasping posture can be formed without deforming the object. If the air around the granular materials in the rubber skin is vacuumed and a jamming transition occurs, the grasping posture can be fixed and the object can be grasped firmly and stably. A high grasping stability under disturbances can be attained. Additionally, the fingertips can work as a small jamming gripper to grasp an object smaller than a fingertip. 
An experimental investigation indicated that the proposed structure provides a high grasping force through a jamming transition, together with high adaptability to the object's shape.", "title": "" } ]
scidocsrr
e20d21036338bf07bd6fac8e2fde564a
A Multi-task Convolutional Neural Network for Joint Iris Detection and Presentation Attack Detection
[ { "docid": "6b58f303ac9098437851df9af229169e", "text": "Recently, spoof detection has become an important and challenging topic in iris recognition. Based on the textural differences between the counterfeit iris images and the live iris images, we propose an efficient method to tackle this problem. Firstly, the normalized iris image is divided into sub-regions according to the properties of iris textures. Local binary patterns (LBP) are then adopted for texture representation of each sub-region. Finally, Adaboost learning is performed to select the most discriminative LBP features for spoof detection. In particular, a kernel density estimation scheme is proposed to complement the insufficiency of counterfeit iris images during Adaboost training. The comparison experiments indicate that the proposed method outperforms state-of-the-art methods in both accuracy and speed.", "title": "" } ]
[ { "docid": "dc5935268e556de67de685922338a895", "text": "Graphs represent general node-link diagrams and have long been utilized in scientific visualization for data organization and management. However, using graphs as a visual representation and interface for navigating and exploring scientific data sets has a much shorter history, yet the amount of work along this direction is clearly on the rise in recent years. In this paper, we take a holistic perspective and survey graph-based representations and techniques for scientific visualization. Specifically, we classify these representations and techniques into four categories, namely, partition-wise, relationship-wise, structure-wise, and provenance-wise. We survey related publications in each category, explaining the roles of graphs in related work and highlighting their similarities and differences. At the end, we reexamine these related publications following the graph-based visualization pipeline. We also point out research trends and remaining challenges in graph-based representations and techniques for scientific visualization.", "title": "" }, { "docid": "f70ff7f71ff2424fbcfea69d63a19de0", "text": "We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.", "title": "" }, { "docid": "b37db75dcd62cc56977d1a28a81be33e", "text": "In this article we report on a new digital interactive self-report method for the measurement of human affect. The AffectButton (Broekens & Brinkman, 2009) is a button that enables users to provide affective feedback in terms of values on the well-known three affective dimensions of Pleasure (Valence), Arousal and Dominance. The AffectButton is an interface component that functions and looks like a medium-sized button. The button presents one dynamically changing iconic facial expression that changes based on the coordinates of the user’s pointer in the button. To give affective feedback the user selects the most appropriate expression by clicking the button, effectively enabling 1-click affective self-report on 3 affective dimensions. Here we analyze 5 previously published studies, and 3 novel large-scale studies (n=325, n=202, n=128). Our results show the reliability, validity, and usability of the button for acquiring three types of affective feedback in various domains. The tested domains are holiday preferences, real-time music annotation, emotion words, and textual situation descriptions (ANET). The types of affective feedback tested are preferences, affect attribution to the previously mentioned stimuli, and self-reported mood. All of the subjects tested were Dutch and aged between 15 and 56 years. We end this article with a discussion of the limitations of the AffectButton and of its relevance to areas including recommender systems, preference elicitation, social computing, online surveys, coaching and tutoring, experimental psychology and psychometrics, content annotation, and game consoles.", "title": "" }, { "docid": "06957a02343acdfcf544f83d0b6f3c4b", "text": "Music influences the growth of plants and can either promote or restrict the growth of plants (depending on the type of music being played). The present experiment is aimed at studying the effect of music on 30 Rose (Rosa chinensis) plants taken in separate pots.
The plants were divided into five groups and each group was subjected to one of the following types of music: Indian Classical music, Vedic chants, Western Classical music, and Rock music, while one group was kept in silence as the control group. The elongation of shoot, internode elongation, the number of flowers and the diameter of the flowers were recorded and the changes studied over a period of 60 days. Significant differences have been noted. It was seen that the plants exposed to Vedic chants showed the maximum elongation of shoot, maximum number of flowers and highest diameter of flowers. The internode elongation was highest in plants exposed to Indian classical music. This clearly shows that subjecting the plants to Vedic chants or Indian classical music promotes the growth of plants as compared to the control group or subjecting them to Western classical or Rock music.", "title": "" }, { "docid": "a27d955a673d4a0f7fc45d83c1ed9377", "text": "Manifold Ranking (MR), a graph-based ranking algorithm, has been widely applied in information retrieval and shown to have excellent performance and feasibility on a variety of data types. Particularly, it has been successfully applied to content-based image retrieval, because of its outstanding ability to discover underlying geometrical structure of the given image database. However, manifold ranking is computationally very expensive, both in graph construction and ranking computation stages, which significantly limits its applicability to very large data sets. In this paper, we extend the original manifold ranking algorithm and propose a new framework named Efficient Manifold Ranking (EMR). We aim to address the shortcomings of MR from two perspectives: scalable graph construction and efficient computation. Specifically, we build an anchor graph on the data set instead of the traditional k-nearest neighbor graph, and design a new form of adjacency matrix utilized to speed up the ranking computation. The experimental results on a real world image database demonstrate the effectiveness and efficiency of our proposed method. With a comparable performance to the original manifold ranking, our method significantly reduces the computational time, making it a promising method for large scale real world retrieval problems.", "title": "" }, { "docid": "4acc30bade98c1257ab0a904f3695f3d", "text": "Manoeuvre assistance is currently receiving increasing attention from the car industry. In this article we focus on the implementation of a reverse parking assistance and more precisely, a reverse parking manoeuvre planner. This paper is based on a manoeuvre planning technique presented in previous work and specialised in planning reverse parking manoeuvres. Since a key part of the previous method was not made explicit, our goal in this paper is to present a practical and reproducible way to implement a reverse parking manoeuvre planner. Our implementation uses a database engine to search for the elementary movements that will make the complete parking manoeuvre. Our results have been successfully tested on a real platform: the CSIRO Autonomous Tractor.", "title": "" }, { "docid": "1be971362e43b07184e04ab249f79ec6", "text": "Purpose – The purpose of this study is to develop a framework for evaluating business-IT alignment. Specifically, the authors emphasize internal business-IT alignment between business and IS groups, which is a typical setting in recent boundary-less, networked business environments.
Design/methodology/approach – Based on the previous studies, a socio-technical approach was developed to explain how the functional integration in the business-IT alignment process could be accomplished in collaborative environments. The study investigates the relationship among social alignment, technical alignment, IS effectiveness, and business performance. Findings – The results indicated that alignment between business and IS groups increased IS effectiveness and business performance. Business-IT alignment resulting from socio-technical arrangements in firms’ infrastructure has positive impacts on business performance. Research limitations/implications – This study is limited by control issues in terms of the impact of the confounding variables on business performance. Future studies need to validate the research model across industries. The study results imply that business-IT alignment is a multidimensional concept that includes social and technical activities explaining the way people and information technology institutionalize business value. Originality/value – By establishing a socio-technical framework of business-IT alignment, this study proposes a conceptual framework for business-IT alignment that accounts for not only improved technical performance, but also improved human performance as well. This study emphasizes the importance of addressing internal socio-technical collaboration in modern business environments.", "title": "" }, { "docid": "a8322b6f508e677eb1b28c4ced3b7869", "text": "The use of repeated expressions to establish coreference allows an investigation of the relationship between basic processes of word recognition and higher level language processes that involve the integration of information into a discourse model. In two experiments on reading, we used eye tracking and event-related potentials to examine whether repeated expressions that are coreferential within a local discourse context show the kind of repetition priming that is shown in lists of words. In both experiments, the effects of lexical repetition were modulated by the effects of local discourse context that arose from manipulations of the linguistic prominence of the antecedent of a coreferentially repeated name. These results are interpreted within the context of discourse prominence theory, which suggests that processes of coreferential interpretation interact with basic mechanisms of memory integration during the construction of a model of discourse.", "title": "" }, { "docid": "41b06d265fa3393fe6b1ab8cd0f13b73", "text": "A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we \"see” details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. 
We also discuss how challenges in visualization are motivating research in psychophysics.", "title": "" }, { "docid": "bd02a00a6021edfa60edf8f5616ff5df", "text": "The transition from product-centric to service-centric business models presents a major challenge to industrial automation and manufacturing systems. This transition increases Machine-to-Machine connectivity among industrial devices, industrial controls systems, and factory floor devices. While initiatives like Industry 4.0 or the Industrial Internet Consortium motivate this transition, the emergence of the Internet of Things and Cyber Physical Systems are key enablers. However, automated and autonomous processes require trust in the communication entities and transferred data. Therefore, we study how to secure a smart service use case for industrial maintenance scenarios. In this use case, equipment needs to securely transmit its status information to local and remote recipients. We investigate and compare two security technologies that provide isolation and a secured execution environment: ARM TrustZone and a Security Controller. To compare these technologies we design and implement a device snapshot authentication system. Our results indicate that the TrustZone based approach promises greater flexibility and performance, but only the Security Controller strongly protects against physical attacks. We argue that the best technology actually depends on the use case and propose a hybrid approach that maximizes security for high-security industrial applications. We believe that the insights we gained will help introducing advanced security mechanisms into the future Industrial Internet of Things.", "title": "" }, { "docid": "2893a60090a15e2c913ae37e976c2bff", "text": "We propose Precoded SUbcarrier Nulling (PSUN), a transmission strategy for OFDM-based wireless communication networks (SCN, Secondary Communication Networks) that need to coexist with pulsed radar systems. It is a novel null-tone allocation method that effectively mitigates inter-carrier interference (ICI) remaining after pulse blanking (PB). When the power from the radar's pulse interference is high, the SCN Rx needs to employ PB to mitigate the interference power. Although PB is known to be an effective technique for suppressing pulsed interference, it magnifies the effect of ICI in OFDM waveforms, and thus degrades bit error rate (BER) performance. For more reliable performance evaluation, we take into account two characteristics of the incumbent radar significantly affect the performance of SCN: (i) antenna sidelobe and (ii) out-of-band emission. Our results show that PSUN effectively mitigates the impact of ICI remaining after PB.", "title": "" }, { "docid": "57c0f9c629e4fdcbb0a4ca2d4f93322f", "text": "Chronic exertional compartment syndrome and medial tibial stress syndrome are uncommon conditions that affect long-distance runners or players involved in team sports that require extensive running. We report 2 cases of bilateral chronic exertional compartment syndrome, with medial tibial stress syndrome in identical twins diagnosed with the use of a Kodiag monitor (B. Braun Medical, Sheffield, United Kingdom) fulfilling the modified diagnostic criteria for chronic exertional compartment syndrome as described by Pedowitz et al, which includes: (1) pre-exercise compartment pressure level >15 mm Hg; (2) 1 minute post-exercise pressure >30 mm Hg; and (3) 5 minutes post-exercise pressure >20 mm Hg in the presence of clinical features. 
Both patients were treated with bilateral anterior fasciotomies through minimal incision and deep posterior fasciotomies with tibial periosteal stripping performed through longer anteromedial incisions under direct vision followed by intensive physiotherapy resulting in complete symptomatic recovery. The etiology of chronic exertional compartment syndrome is not fully understood, but it is postulated abnormal increases in intramuscular pressure during exercise impair local perfusion, causing ischemic muscle pain. No familial predisposition has been reported to date. However, some authors have found that no significant difference exists in the relative perfusion, in patients, diagnosed with chronic exertional compartment syndrome. Magnetic resonance images of affected compartments have indicated that the pain is not due to ischemia, but rather from a disproportionate oxygen supply versus demand. We believe this is the first report of chronic exertional compartment syndrome with medial tibial stress syndrome in twins, raising the question of whether there is a genetic predisposition to the causation of these conditions.", "title": "" }, { "docid": "a204be50d370c494dcc7523cd9ed7740", "text": "Computational thinking (CT) draws on concepts and practices that are fundamental to computing and computer science. It includes epistemic and representational practices, such as problem representation, abstraction, decomposition, simulation, verification, and prediction. However, these practices are also central to the development of expertise in scientific and mathematical disciplines. Recently, arguments have been made in favour of integrating CT and programming into the K-12 STEM curricula. In this paper, we first present a theoretical investigation of key issues that need to be considered for integrating CT into K-12 science topics by identifying the synergies between CT and scientific expertise using a particular genre of computation: agent-based computation. We then present a critical review of the literature in educational computing, and propose a set of guidelines for designing learning environments on science topics that can jointly foster the development of computational thinking with scientific expertise. This is followed by the description of a learning environment that supports CT through modeling and simulation to help middle school students learn physics and biology. We demonstrate the effectiveness of our system by discussing the results of a small study conducted in a middle school science classroom. Finally, we discuss the implications of our work for future research on developing CT-based science learning environments.", "title": "" }, { "docid": "3a43fa89802a9270cd5ac69a454b533b", "text": "In this paper, we report typical soft breakdown and BVDSS walk-in/walk-out phenomena observed in the development of ON semiconductor's T8 60V trench power MOSFET. These breakdown behaviors show strong correlation with doping profile. We propose two 1D location-dependent variables, Qint(y) and Cave(y), to assist the study, and demonstrate the effectiveness of them in revealing hidden information behind regular SIMS data. 
Our study details the methodology of engineering doping profiles for improved breakdown stability.", "title": "" }, { "docid": "1d41e6f55521cdba4fc73febd09d2eb4", "text": "1.", "title": "" }, { "docid": "1eae35badf1dd47462ce03a60db89e05", "text": "Convolutional Neural Network(CNN) based semantic segmentation require extensive pixel level manual annotation which is daunting for large microscopic images. The paper is aimed towards mitigating this labeling effort by leveraging the recent concept of generative adversarial network(GAN) wherein a generator maps latent noise space to realistic images while a discriminator differentiates between samples drawn from database and generator. We extend this concept to a multi task learning wherein a discriminator-classifier network differentiates between fake/real examples and also assigns correct class labels. Though our concept is generic, we applied it for the challenging task of vessel segmentation in fundus images. We show that proposed method is more data efficient than a CNN. Specifically, with 150K, 30K and 15K training examples, proposed method achieves mean AUC of 0.962, 0.945 and 0.931 respectively, whereas the simple CNN achieves AUC of 0.960, 0.921 and 0.916 respectively.", "title": "" }, { "docid": "58c238443e7fbe7043cfa4c67b28dbb2", "text": "In the fall of 2013, we offered an open online Introduction to Recommender Systems through Coursera, while simultaneously offering a for-credit version of the course on-campus using the Coursera platform and a flipped classroom instruction model. As the goal of offering this course was to experiment with this type of instruction, we performed extensive evaluation including surveys of demographics, self-assessed skills, and learning intent; we also designed a knowledge-assessment tool specifically for the subject matter in this course, administering it before and after the course to measure learning, and again 5 months later to measure retention. We also tracked students through the course, including separating out students enrolled for credit from those enrolled only for the free, open course.\n Students had significant knowledge gains across all levels of prior knowledge and across all demographic categories. The main predictor of knowledge gain was effort expended in the course. Students also had significant knowledge retention after the course. Both of these results are limited to the sample of students who chose to complete our knowledge tests. Student completion of the course was hard to predict, with few factors contributing predictive power; the main predictor of completion was intent to complete. Students who chose a concepts-only track with hand exercises achieved the same level of knowledge of recommender systems concepts as those who chose a programming track and its added assignments, though the programming students gained additional programming knowledge. Based on the limited data we were able to gather, face-to-face students performed as well as the online-only students or better; they preferred this format to traditional lecture for reasons ranging from pure convenience to the desire to watch videos at a different pace (slower for English language learners; faster for some native English speakers). 
This article also includes our qualitative observations, lessons learned, and future directions.", "title": "" }, { "docid": "48ce635355fbb5ffb7d6166948b4f135", "text": "Computational generation of literary artifacts very often resorts to template-like schemas that can be instantiated into complex structures. With this view in mind, the present paper reviews a number of existing attempts to provide an elementary set of patterns for basic plots. An attempt is made to formulate these descriptions of possible plots in terms of character functions, an abstraction of plot-bearing elements of a story originally formulated by Vladimir Propp. These character functions act as the building blocks of the Propper system, an existing framework for computational story generation. The paper explores the set of extensions required to the original set of character functions to allow for a basic representation of the analysed schemata, and a solution for automatic generation of stories based on this formulation of the narrative schemas. This solution uncovers important insights on the relative expressive power of the representation of narrative in terms of character functions, and their impact on the generative potential of the framework is discussed. 1998 ACM Subject Classification F.4.1 Knowledge Representation Formalisms and Methods", "title": "" }, { "docid": "5f01cb5c34ac9182f6485f70d19101db", "text": "Gastroesophageal reflux is a condition in which the acidified liquid content of the stomach backs up into the esophagus. The antacid magaldrate and the prokinetic domperidone are two drugs clinically used for the treatment of gastroesophageal reflux symptoms. However, the evidence of a superior effectiveness of this combination in comparison with individual drugs is lacking. A double-blind, randomized and comparative clinical trial study was designed to characterize the efficacy and safety of a fixed dose combination of magaldrate (800 mg)/domperidone (10 mg) against domperidone alone (10 mg), in patients with gastroesophageal reflux symptoms. One hundred patients with gastroesophageal reflux diagnosed by Carlsson scale were randomized to receive a chewable tablet of a fixed dose of magaldrate/domperidone combination or domperidone alone four times each day for a month. The magaldrate/domperidone combination showed superior efficacy in decreasing global esophageal (pyrosis, regurgitation, dysphagia, hiccup, gastroparesis, sialorrhea, globus pharyngeus and nausea) and extraesophageal (chronic cough, hoarseness, asthmatiform syndrome, laryngitis, pharyngitis, halitosis and chest pain) reflux symptoms compared with domperidone alone. In addition, the magaldrate/domperidone combination improved the quality of life of patients with gastroesophageal reflux in a statistically significant manner with respect to monotherapy, and more patients perceived the combination as a better treatment. Both treatments were well tolerated. Data suggest that the oral magaldrate/domperidone mixture could be a better option in the treatment of gastroesophageal reflux symptoms than domperidone alone.", "title": "" }, { "docid": "05c93457d8b90fe61fb9865268776656", "text": "Class evolution, the phenomenon of class emergence and disappearance, is an important research topic for data stream mining. All previous studies implicitly regard class evolution as a transient change, which is not true for many real-world problems. This paper concerns the scenario where classes emerge or disappear gradually.
A class-based ensemble approach, namely Class-Based ensemble for Class Evolution (CBCE), is proposed. By maintaining a base learner for each class and dynamically updating the base learners with new data, CBCE can rapidly adjust to class evolution. A novel under-sampling method for the base learners is also proposed to handle the dynamic class-imbalance problem caused by the gradual evolution of classes. Empirical studies demonstrate the effectiveness of CBCE in various class evolution scenarios in comparison to existing class evolution adaptation methods.", "title": "" } ]
scidocsrr
8e25d13269644e4480c5ef9c1a1bca5b
A 60 GHz Horizontally Polarized Magnetoelectric Dipole Antenna Array With 2-D Multibeam Endfire Radiation
[ { "docid": "813f499c7140f882b077be51e99a8ef6", "text": "This article discusses the challenges, benefits and approaches associated with realizing large-scale antenna arrays at mmWave frequency bands for future 5G cellular devices. Key design considerations are investigated to deduce a novel and practical phased array antenna solution operating at 28 GHz with near spherical coverage. The approach is further evolved into a first-of-a-kind cellular phone prototype equipped with mmWave 5G antenna arrays consisting of a total of 32 low-profile antenna elements. Indoor measurements are carried out using the presented prototype to characterize the proposed mmWave antenna system using 16-QAM modulated signals with 27.925 GHz carrier frequency. The biological implications due to the absorbed electromagnetic waves when using mmWave cellular devices are studied and compared in detail with those of 3/4G cellular devices.", "title": "" } ]
[ { "docid": "8161fe7f62ecaddcd8e4eb8ef38aefb1", "text": "With AR-CANVAS we introduce the notion of the augmented reality canvas for information visualization. This is beyond the traditional empty (white), rectangular, and flat-dimensional canvas seen in traditional information visualization. Instead, the AR-CANVAS describes the part of a viewer’s field-of-view where information visualization is rendered in-situ with respect to visible and potentially invisible real-world objects. The visual and spatial complexity of the canvas requires rethinking how to design visualizations for augmented reality. Based on an example of a library exploration scenario, we describe the essential aspects of the AR-CANVAS as well as dimensions for visualization design. We conclude with a brief discussion of challenges in designing visualizations into such a canvas.", "title": "" }, { "docid": "a162277bc8e10484211ff4a4dee116e6", "text": "BACKGROUND\nHunter syndrome (mucopolysaccharidosis type II (MPS II)) is a rare metabolic disease that can severely compromise health, well-being and life expectancy. Little evidence has been published on the impact of MPS II on health-related quality of life (HRQL). The objective of this study was to describe this impact using the Hunter Syndrome-Functional Outcomes for Clinical Understanding Scale (HS-FOCUS) questionnaire and a range of standard validated questionnaires previously used in paediatric populations.\n\n\nMETHODS\nClinical and demographic characteristics collected in a clinical trial and responses to four HRQL questionnaires completed both by patients and parents prior to enzyme replacement treatment were used. The association between questionnaire scores and clinical function parameters were tested using Spearman rank-order correlations. Results were compared to scores in other paediatric populations with chronic conditions obtained through a targeted literature search of published studies.\n\n\nRESULTS\nOverall, 96 male patients with MPS II and their parents were enrolled in the trial. All parents completed the questionnaires and 53 patients above 12 years old also completed the self-reported versions. Parents' and patients' responses were analysed separately and results were very similar. Dysfunction according to the HS-FOCUS and the CHAQ was most pronounced in the physical function domains. Very low scores were reported in the Self Esteem and Family Cohesion domains in the CHQ and HUI3 disutility values indicated a moderate impact. Scores reported by patients and their parents were consistently lower than scores in the other paediatric populations identified (except the parent-reported Behaviour score); and considerably lower than normative values.\n\n\nCONCLUSIONS\nThis study describes the impact on HRQL in patients with MPS II and provides a broader context by comparing it with that of other chronic paediatric diseases. Physical function and the ability to perform day-to-day activities were the most affected areas and a considerable impact on the psychological aspects of patients' HRQL was also found, with a higher level of impairment across most dimensions (particularly Pain and Self Esteem) than that of other paediatric populations. 
Such humanistic data provide increasingly important support for establishing priorities for health care spending, and as a component of health economic analysis.", "title": "" }, { "docid": "833d5f05513f6815dc113b36a22714de", "text": "Stress shielding of the periprosthetic femur following total hip arthroplasty is a problem that can promote the premature loosening of femoral stems. In order to reduce the need for revision surgery it is thought that more flexible implant designs need to be considered. In this work, the mechanical properties of laser melted square pore cobalt chrome molybdenum cellular structures have been incorporated into the design of a traditional monoblock femoral stem. The influence of incorporating the properties of cellular structures on the load transfer to the periprosthetic femur was investigated using a three dimensional finite element model. Eleven different stiffness configurations were investigated by using fully porous and functionally graded approaches. This investigation confirms that the periprosthetic stress values depend on the stiffness configuration of the stem. The numerical results showed that stress shielding is reduced in the periprosthetic Gruen zones when the mechanical properties of cobalt chrome molybdenum cellular structures are used. This work identifies that monoblock femoral stems manufactured using a laser melting process, which are designed for reduced stiffness, have the potential to contribute towards reducing stress shielding.", "title": "" }, { "docid": "d2c202e120fecf444e77b08bd929e296", "text": "Deep Learning has been applied successfully to speech processing. In this paper we propose an architecture for speech synthesis using multiple speakers. Some hidden layers are shared by all the speakers, while there is a specific output layer for each speaker. Objective and perceptual experiments prove that this scheme produces much better results in comparison with single speaker model. Moreover, we also tackle the problem of speaker interpolation by adding a new output layer (α-layer) on top of the multi-output branches. An identifying code is injected into the layer together with acoustic features of many speakers. Experiments show that the α-layer can effectively learn to interpolate the acoustic features between speakers.", "title": "" }, { "docid": "d521b14ee04dbf69656240ef47c3319c", "text": "This paper presents a computationally efficient approach for temporal action detection in untrimmed videos that outperforms state-of-the-art methods by a large margin. We exploit the temporal structure of actions by modeling an action as a sequence of sub-actions. A novel and fully automatic sub-action discovery algorithm is proposed, where the number of sub-actions for each action as well as their types are automatically determined from the training videos. We find that the discovered sub-actions are semantically meaningful. To localize an action, an objective function combining appearance, duration and temporal structure of sub-actions is optimized as a shortest path problem in a network flow formulation. A significant benefit of the proposed approach is that it enables real-time action localization (40 fps) in untrimmed videos. 
We demonstrate state-of-the-art results on THUMOS’14 and MEXaction2 datasets.", "title": "" }, { "docid": "65271fcf27d43ef88910e0a872eec0b9", "text": "Purpose – The purpose of this paper is to investigate whether online environment cues (web site quality and web site brand) affect customer purchase intention towards an online retailer and whether this impact is mediated by customer trust and perceived risk. The study also aimed to assess the degree of reciprocity between consumers’ trust and perceived risk in the context of an online shopping environment. Design/methodology/approach – The study proposed a research framework for testing the relationships among the constructs based on the stimulus-organism-response framework. In addition, this study developed a non-recursive model. After the validation of measurement scales, empirical analyses were performed using structural equation modelling. Findings – The findings confirm that web site quality and web site brand affect consumers’ trust and perceived risk, and in turn, consumer purchase intention. Notably, this study finds that the web site brand is a more important cue than web site quality in influencing customers’ purchase intention. Furthermore, the study reveals that the relationship between trust and perceived risk is reciprocal. Research limitations/implications – This study adopted four dimensions – technical adequacy, content quality, specific content and appearance – to measure web site quality. However, there are still many competing concepts regarding the measurement of web site quality. Further studies using other dimensional measures may be needed to verify the research model. Practical implications – Online retailers should focus their marketing strategies more on establishing the brand of the web site rather than improving the functionality of the web site. Originality/value – This study proposed a non-recursive model for empirically analysing the link between web site quality, web site brand, trust, perceived risk and purchase intention towards the online retailer.", "title": "" }, { "docid": "8c40aee3f707d0bf2a3b21d49d175f2d", "text": "We describe a technique for choosing multiple colours for use during data visualization. Our goal is a systematic method for maximizing the total number of colours available for use, while still allowing an observer to rapidly and accurately search a display for any one of the given colours. Previous research suggests that we need to consider three separate effects during colour selection: colour distance, linear separation, and colour category. We describe a simple method for measuring and controlling all of these effects. Our method was tested by performing a set of target identification studies; we analysed the ability of thirty-eight observers to find a colour target in displays that contained differently coloured background elements. Results showed our method can be used to select a group of colours that will provide good differentiation between data elements during data visualization.", "title": "" }, { "docid": "37069924c8c645b6d7014928113398d9", "text": "A broad range of approaches to semantic document retrieval has been developed in the context of the Semantic Web. This survey builds bridges among them. We introduce a classification scheme for semantic search engines and clarify terminology. We present an overview of ten selected approaches and compare them by means of our classification criteria.
Based on this comparison, we identify not only common concepts and outstanding features, but also open issues. Finally, we give directions for future application development and research.", "title": "" }, { "docid": "a60df3040ff1e2d7ac0ef898c3d3671e", "text": "Recommender Systems have been around for more than a decade now. Choosing what book to read next has always been a question for many. Even for students, deciding which textbook or reference book to read on a topic unknown to them is a big question. In this paper, we try to present a model for a web-based personalized hybrid book recommender system which exploits varied aspects of giving recommendations apart from the regular collaborative and content-based filtering approaches. Temporal aspects for the recommendations are incorporated. Also for users of different age, gender and country, personalized recommendations can be made on these demographic parameters. Scraping information from the web and using the information obtained from this process can be equally useful in making recommendations.", "title": "" }, { "docid": "7c87ec9ac7e5170e0ddaccadf992ea3f", "text": "Social computational systems emerge in the wild on popular social networking sites like Facebook and Twitter, but there remains confusion about the relationship between social interactions and the technical traces of interaction left behind through use. Twitter interactions and social experience are particularly challenging to make sense of because of the wide range of tools used to access Twitter (text message, website, iPhone, TweetDeck and others), and the emergent set of practices for annotating message context (hashtags, reply to's and direct messaging). Further, Twitter is used as a back channel of communication in a wide range of contexts, ranging from disaster relief to watching television. Our study examines Twitter as a transport protocol that is used differently in different socio-technical contexts, and presents an analysis of how researchers might begin to approach studies of Twitter interactions with a more reflexive stance toward the application programming interfaces (APIs) Twitter provides. We conduct a careful review of existing literature examining socio-technical phenomena on Twitter, revealing a collective inconsistency in the description of data gathering and analysis methods. In this paper, we present a candidate architecture and methodological approach for examining specific parts of the Twittersphere. Our contribution begins a discussion among social media researchers on the topic of how to systematically and consistently make sense of the social phenomena that emerge through Twitter. This work supports the comparative analysis of Twitter studies and the development of social media theories.", "title": "" }, { "docid": "66cde02bdf134923ca7ef3ec5c4f0fb8", "text": "In this paper a method for holographic localization of passive UHF-RFID transponders is presented. It is shown how persons or devices that are equipped with a RFID reader and that are moving along a trajectory can be enabled to locate tagged objects reliably. The localization method is based on phase values sampled from a synthetic aperture by a RFID reader. The calculated holographic image is a spatial probability density function that reveals the actual RFID tag position. Experimental results are presented which show that the holographically measured positions are in good agreement with the real position of the tag. 
Additional simulations have been carried out to investigate the positioning accuracy of the proposed method depending on different distortion parameters and measuring conditions. The effect of antenna phase center displacement is briefly discussed and measurements are shown that quantify the influence on the phase measurement.", "title": "" }, { "docid": "454b26034405902348595617ea433700", "text": "The practice of epidemiology requires asking causal questions. Formal frameworks for causal inference developed over the past decades have the potential to improve the rigor of this process. However, the appropriate role for formal causal thinking in applied epidemiology remains a matter of debate. We argue that a formal causal framework can help in designing a statistical analysis that comes as close as possible to answering the motivating causal question, while making clear what assumptions are required to endow the resulting estimates with a causal interpretation. A systematic approach for the integration of causal modeling with statistical estimation is presented. We highlight some common points of confusion that occur when causal modeling techniques are applied in practice and provide a broad overview on the types of questions that a causal framework can help to address. Our aims are to argue for the utility of formal causal thinking, to clarify what causal models can and cannot do, and to provide an accessible introduction to the flexible and powerful tools provided by causal models.", "title": "" }, { "docid": "a11f1155f3a9805f7c17284c99eed109", "text": "This paper presents the architecture and design of a high-performance asynchronous Huffman decoder for compressed-code embedded processors. In such processors, embedded programs are stored in compressed form in instruction ROM, then are decompressed on demand during instruction cache refill. The Huffman decoder is used as a code decompression engine. The circuit is non-pipelined, and is implemented as an iterative self-timed ring. It achieves a high-speed decode rate with very low area overhead. Simulations using Lsim show an average throughput of 32 bits/25 ns on the output side (or 163 MBytes/sec, or 1303 Mbit/sec), corresponding to about 889 Mbit/sec on the input side. The area of the design is extremely small: under 1 mm in a 0.8 micron fullcustom layout. The decoder is estimated to have higher throughput than any comparable synchronous Huffman decoder (after normalizing for feature size and voltage), yet is much smaller than synchronous designs. Its performance is also 83% faster than a recently published asynchronous Huffman decoder using the same technology.", "title": "" }, { "docid": "97841476457ac6599e005367d1ffc5b9", "text": "Robust vigilance estimation during driving is very crucial in preventing traffic accidents. Many approaches have been proposed for vigilance estimation. However, most of the approaches require collecting subject-specific labeled data for calibration which is high-cost for real-world applications. To solve this problem, domain adaptation methods can be used to align distributions of source subject features (source domain) and new subject features (target domain). By reusing existing data from other subjects, no labeled data of new subjects is required to train models. In this paper, our goal is to apply adversarial domain adaptation networks to cross-subject vigilance estimation. 
We adopt two kinds of recently proposed adversarial domain adaptation networks and compare their performance with those of several traditional domain adaptation methods and the baseline without domain adaptation. A publicly available dataset, SEED-VIG, is used to evaluate the methods. The dataset includes electroencephalography (EEG) and electrooculography (EOG) signals, as well as the corresponding vigilance level annotations during simulated driving. Compared with the baseline, both adversarial domain adaptation networks achieve improvements over 10% in terms of Pearson’s correlation coefficient. In addition, both methods considerably outperform the traditional domain adaptation methods.", "title": "" }, { "docid": "529045d9f2f78b5168ec2c7ca67ea9ab", "text": "The development of a chronic mollusc toxicity test is a current work item on the agenda of the OECD. The freshwater pond snail Lymnaea stagnalis is one of the candidate snail species for such a test. This paper presents a 21-day chronic toxicity test with L. stagnalis, focussing on embryonic development. Eggs were collected from freshly laid egg masses and exposed individually until hatching. The endpoints were hatching success and mean hatching time. Tributyltin (TBT), added as TBT-chloride, was chosen as model substance. The selected exposure concentrations ranged from 0.03 to 10 μg TBT/L (all as nominal values) and induced the full range of responses. The embryos were sensitive to TBT (the NOEC for mean hatching time was 0.03 μg TBT/L and the NOEC for hatching success was 0.1 μg TBT/L). In addition, data on maximum limit concentrations of seven common solvents, recommended in OECD aquatic toxicity testing guidelines, are presented. Among the results, further findings as average embryonic growth and mean hatching time of control groups are provided. In conclusion, the test presented here could easily be standardised and is considered useful as a potential trigger to judge if further studies, e.g. a (partial) life-cycle study with molluscs, should be conducted.", "title": "" }, { "docid": "64c06c6669df3e500df0d3b7fe792160", "text": "New questions about microbial ecology and diversity combined with significant improvement in the resolving power of molecular tools have helped the reemergence of the field of prokaryotic biogeography. Here, we show that biogeography may constitute a cornerstone approach to study diversity patterns at different taxonomic levels in the prokaryotic world. Fundamental processes leading to the formation of biogeographic patterns are examined in an evolutionary and ecological context. Based on different evolutionary scenarios, biogeographic patterns are thus posited to consist of dramatic range expansion or regression events that would be the results of evolutionary and ecological forces at play at the genotype level. The deterministic or random nature of those underlying processes is, however, questioned in light of recent surveys. Such scenarios led us to predict the existence of particular genes whose presence or polymorphism would be associated with cosmopolitan taxa. Furthermore, several conceptual and methodological pitfalls that could hamper future developments of the field are identified, and future approaches and new lines of investigation are suggested.", "title": "" }, { "docid": "ffd0494007a1b82ed6b03aaefd7f8be9", "text": "In this paper we consider the problem of robot navigation in simple maze-like environments where the robot has to rely on its onboard sensors to perform the navigation task. 
In particular, we are interested in solutions to this problem that do not require localization, mapping or planning. Additionally, we require that our solution can quickly adapt to new situations (e.g., changing navigation goals and environments). To meet these criteria we frame this problem as a sequence of related reinforcement learning tasks. We propose a successor-feature-based deep reinforcement learning algorithm that can learn to transfer knowledge from previously mastered navigation tasks to new problem instances. Our algorithm substantially decreases the required learning time after the first task instance has been solved, which makes it easily adaptable to changing environments. We validate our method in both simulated and real robot experiments with a Robotino and compare it to a set of baseline methods including classical planning-based navigation.", "title": "" }, { "docid": "3b3dcadb00db43fb38cebe0c5105c25b", "text": "This paper explores the capabilities of convolutional neural networks to deal with a task that is easily manageable for humans: perceiving 3D pose of a human body from varying angles. However, in our approach, we are restricted to using a monocular vision system. For this purpose, we apply the convolutional neural networks approach on RGB videos and extend it to three dimensional convolutions. This is done via encoding the time dimension in videos as the 3rd dimension in convolutional space, and directly regressing to human body joint positions in 3D coordinate space. This research shows the ability of such a network to achieve state-of-the-art performance on the selected Human3.6M dataset, thus demonstrating the possibility of successfully representing temporal data with an additional dimension in the convolutional operation.", "title": "" }, { "docid": "865c0c0b4ab0e063e5caa3387c1a8741", "text": "i", "title": "" }, { "docid": "e28f51ea5a09081bd3037a26ca25aebd", "text": "Eye tracking specialists often need to understand and represent aggregate scanning strategies, but methods to identify similar scanpaths and aggregate multiple scanpaths have been elusive. A new method is proposed here to identify scanning strategies by aggregating groups of matching scanpaths automatically. A dataset of scanpaths is first converted to sequences of viewed area names, which are then represented in a dotplot. Matching sequences in the dotplot are found with linear regressions, and then used to cluster the scanpaths hierarchically. Aggregate scanning strategies are generated for each cluster and presented in an interactive dendrogram. While the clustering and aggregation method works in a bottom-up fashion, based on pair-wise matches, a top-down extension is also described, in which a scanning strategy is first input by cursor gesture, then matched against the dataset. The ability to discover both bottom-up and top-down strategy matches provides a powerful tool for scanpath analysis, and for understanding group scanning strategies.", "title": "" } ]
scidocsrr
21112bc4536e1d51ae245f45126d43e0
Linked data partitioning for RDF processing on Apache Spark
[ { "docid": "576aa36956f37b491382b0bdd91f4bea", "text": "The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "title": "" }, { "docid": "efb124a26b0cdc9b022975dd83ec76c8", "text": "Apache Spark is an open-source cluster computing framework for big data processing. It has emerged as the next generation big data processing engine, overtaking Hadoop MapReduce which helped ignite the big data revolution. Spark maintains MapReduce's linear scalability and fault tolerance, but extends it in a few important ways: it is much faster (100 times faster for certain applications), much easier to program in due to its rich APIs in Python, Java, Scala (and shortly R), and its core data abstraction, the distributed data frame, and it goes far beyond batch applications to support a variety of compute-intensive tasks, including interactive queries, streaming, machine learning, and graph processing. This tutorial will provide an accessible introduction to Spark and its potential to revolutionize academic and commercial data science practices.", "title": "" } ]
[ { "docid": "8ba192226a3c3a4f52ca36587396e85c", "text": "For many years I have been engaged in psychotherapy with individuals in distress. In recent years I have found myself increasingly concerned with the process of abstracting from that experience the general principles which appear to be involved in it. I have endeavored to discover any orderliness, any unity which seems to inhere in the subtle, complex tissue of interpersonal relationship in which I have so constantly been immersed in therapeutic work. One of the current products of this concern is an attempt to state, in formal terms, a theory of psychotherapy, of personality, and of interpersonal relationships which will encompass and contain the phenomena of my experience. What I wish to do in this paper is to take one very small segment of that theory, spell it out more completely, and explore its meaning and usefulness.", "title": "" }, { "docid": "ec369ae7aa038ab688173a7583c51a22", "text": "OBJECTIVE\nTo examine longitudinal associations of parental report of household food availability and parent intakes of fruits, vegetables and dairy foods with adolescent intakes of the same foods. This study expands upon the limited research of longitudinal studies examining the role of parents and household food availability in adolescent dietary intakes.\n\n\nDESIGN\nLongitudinal study. Project EAT-II followed an ethnically and socio-economically diverse sample of adolescents from 1999 (time 1) to 2004 (time 2). In addition to the Project EAT survey, adolescents completed the Youth Adolescent Food-Frequency Questionnaire in both time periods, and parents of adolescents completed a telephone survey at time 1. General linear modelling was used to examine the relationship between parent intake and home availability and adolescent intake, adjusting for time 1 adolescent intakes. Associations were examined separately for the high school and young adult cohorts and separately for males and females in combined cohorts.\n\n\nSUBJECTS/SETTING\nThe sample included 509 pairs of parents/guardians and adolescents.\n\n\nRESULTS\nVegetables served at dinner significantly predicted adolescent intakes of vegetables for males (P = 0.037), females (P = 0.009), high school (P = 0.033) and young adults (P = 0.05) at 5-year follow-up. Among young adults, serving milk at dinner predicted dairy intake (P = 0.002). Time 1 parental intakes significantly predicted intakes of young adults for fruit (P = 0.044), vegetables (P = 0.041) and dairy foods (P = 0.008). Parental intake predicted intake of dairy for females (P = 0.02).\n\n\nCONCLUSIONS\nThe findings suggest the importance of providing parents of adolescents with knowledge and skills to enhance the home food environment and improve their own eating behaviours.", "title": "" }, { "docid": "3882687dfa4f053d6ae128cf09bb8994", "text": "In recent years, we have seen tremendous progress in the field of object detection. Most of the recent improvements have been achieved by targeting deeper feedforward networks. However, many hard object categories such as bottle, remote, etc. require representation of fine details and not just coarse, semantic representations. But most of these fine details are lost in the early convolutional layers. What we need is a way to incorporate finer details from lower layers into the detection architecture. 
Skip connections have been proposed to combine high-level and low-level features, but we argue that selecting the right features from low-level requires top-down contextual information. Inspired by the human visual pathway, in this paper we propose top-down modulations as a way to incorporate fine details into the detection framework. Our approach supplements the standard bottom-up, feedforward ConvNet with a top-down modulation (TDM) network, connected using lateral connections. These connections are responsible for the modulation of lower layer filters, and the top-down network handles the selection and integration of contextual information and lowlevel features. The proposed TDM architecture provides a significant boost on the COCO benchmark, achieving 28.6 AP for VGG16 and 35.2 AP for ResNet101 networks. Using InceptionResNetv2, our TDM model achieves 37.3 AP, which is the best single-model performance to-date on the COCO testdev benchmark, without any bells and whistles.", "title": "" }, { "docid": "ad808ef13f173eda961b6157a766f1a9", "text": "Written text often provides sufficient clues to identify the author, their gender, age, and other important attributes. Consequently, the authorship of training and evaluation corpora can have unforeseen impacts, including differing model performance for different user groups, as well as privacy implications. In this paper, we propose an approach to explicitly obscure important author characteristics at training time, such that representations learned are invariant to these attributes. Evaluating on two tasks, we show that this leads to increased privacy in the learned representations, as well as more robust models to varying evaluation conditions, including out-of-domain corpora.", "title": "" }, { "docid": "d50b6e7c130080eba98bf4437c333f16", "text": "In this paper we provide a brief review of how out-of-sample methods can be used to construct tests that evaluate a time-series model's ability to predict. We focus on the role that parameter estimation plays in constructing asymptotically valid tests of predictive ability. We illustrate why forecasts and forecast errors that depend upon estimated parameters may have statistical properties that differ from those of their population counterparts. We explain how to conduct asymptotic inference, taking due account of dependence on estimated parameters.", "title": "" }, { "docid": "91affcd02ba981189eeaf25d94657276", "text": "In this paper, we develop a 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using Deep Convolutional Neural Networks (CNN). Our models are trained end-to-end from scratch using the ACD Challenge 2017 dataset comprising of 100 studies, each containing Cardiac MR images in End Diastole and End Systole phase. We show that both our segmentation models achieve near state-of-the-art performance scores in terms of distance metrics and have convincing accuracy in terms of clinical parameters. A comparative analysis is provided by introducing a novel dice loss function and its combination with cross entropy loss. 
By exploring different network structures and comprehensive experiments, we discuss several key insights to obtain optimal model performance, which also is central to the theme of this challenge.", "title": "" }, { "docid": "121fc3a009e8ce2938f822ba437bdaa3", "text": "Due to an increased awareness and significant environmental pressures from various stakeholders, companies have begun to realize the significance of incorporating green practices into their daily activities. This paper proposes a framework using Fuzzy TOPSIS to select green suppliers for a Brazilian electronics company; our framework is built on the criteria of green supply chain management (GSCM) practices. An empirical analysis is made, and the data are collected from a set of 12 available suppliers. We use a fuzzy TOPSIS approach to rank the suppliers, and the results of the proposed framework are compared with the ranks obtained by both the geometric mean and the graded mean methods of fuzzy TOPSIS methodology. Then a Spearman rank correlation coefficient is used to find the statistical difference between the ranks obtained by the three methods. Finally, a sensitivity analysis has been performed to examine the influence of the preferences given by the decision makers for the chosen GSCM practices on the selection of green suppliers. Results indicate that the four dominant criteria are Commitment of senior management to GSCM; Product designs that reduce, reuse, recycle, or reclaim materials, components, or energy; Compliance with legal environmental requirements and auditing programs; and Product designs that avoid or reduce toxic or hazardous material use. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5125f5099f77a32ff9a1f2054ef1e664", "text": "Human activities are inherently translation invariant and hierarchical. Human activity recognition (HAR), a field that has garnered a lot of attention in recent years due to its high demand in various application domains, makes use of time-series sensor data to infer activities. In this paper, a deep convolutional neural network (convnet) is proposed to perform efficient and effective HAR using smartphone sensors by exploiting the inherent characteristics of activities and 1D time-series signals, at the same time providing a way to automatically and data-adaptively extract robust features from raw data. Experiments show that convnets indeed derive relevant and more complex features with every additional layer, although difference of feature complexity level decreases with every additional layer. A wider time span of temporal local correlation can be exploited (1x9~1x14) and a low pooling size (1x2~1x3) is shown to be beneficial. Convnets also achieved an almost perfect classification on moving activities, especially very similar ones which were previously perceived to be very difficult to classify. Lastly, convnets outperform other state-of-the-art data mining techniques in HAR for the benchmark dataset collected from 30 volunteer subjects, achieving an overall performance of 94.79% on the test set with raw sensor data, and 95.75% with additional information of temporal fast Fourier transform of the HAR data set.", "title": "" }, { "docid": "7a10f559d9bbf1b6853ff6b89f5857f7", "text": "Despite the much-ballyhooed increase in outsourcing, most companies are in do-it-yourself mode for the bulk of their processes, in large part because there's no way to compare outside organizations' capabilities with those of internal functions. 
Given the lack of comparability, it's almost surprising that anyone outsources today. But it's not surprising that cost is by far companies' primary criterion for evaluating outsourcers or that many companies are dissatisfied with their outsourcing relationships. A new world is coming, says the author, and it will lead to dramatic changes in the shape and structure of corporations. A broad set of process standards will soon make it easy to determine whether a business capability can be improved by outsourcing it. Such standards will also help businesses compare service providers and evaluate the costs versus the benefits of outsourcing. Eventually these costs and benefits will be so visible to buyers that outsourced processes will become a commodity, and prices will drop significantly. The low costs and low risk of outsourcing will accelerate the flow of jobs offshore, force companies to reassess their strategies, and change the basis of competition. The speed with which some businesses have already adopted process standards suggests that many previously unscrutinized areas are ripe for change. In the field of technology, for instance, the Carnegie Mellon Software Engineering Institute has developed a global standard for software development processes, called the Capability Maturity Model (CMM). For companies that don't have process standards in place, it makes sense for them to create standards by working with customers, competitors, software providers, businesses that processes may be outsourced to, and objective researchers and standard-setters. Setting standards is likely to lead to the improvement of both internal and outsourced processes.", "title": "" }, { "docid": "8564762ca6de73d72236f94bc5fe0a7a", "text": "The current work examines the phenomenon of Virtual Interpersonal Touch (VIT), people touching one another via force-feedback haptic devices. As collaborative virtual environments become utilized more effectively, it is only natural that interactants will have the ability to touch one another. In the current work, we used relatively basic devices to begin to explore the expression of emotion through VIT. In Experiment 1, participants utilized a 2 DOF force-feedback joystick to express seven emotions. We examined various dimensions of the forces generated and subjective ratings of the difficulty of expressing those emotions. In Experiment 2, a separate group of participants attempted to recognize the recordings of emotions generated in Experiment 1. In Experiment 3, pairs of participants attempted to communicate the seven emotions using physical handshakes. Results indicated that humans were above chance when recognizing emotions via VIT, but not as accurate as people expressing emotions through non-mediated handshakes. We discuss a theoretical framework for understanding emotions expressed through touch as well as the implications of the current findings for the utilization of VIT in human computer interaction. Virtual Interpersonal Touch: Expressing and Recognizing Emotions through Haptic Devices There are many reasons to support the development of collaborative virtual environments (Lanier, 2001). One major criticism of collaborative virtual environments, however, is that they do not provide emotional warmth and nonverbal intimacy (Mehrabian, 1967; Sproull & Kiesler, 1986).
In the current work, we empirically explore the augmentation of collaborative virtual environments with simple networked haptic devices to allow for the transmission of emotion through virtual interpersonal touch (VIT). EMOTION IN SOCIAL INTERACTION Interpersonal communication is largely non-verbal (Argyle, 1988), and one of the primary purposes of nonverbal behavior is to communicate subtleties of emotional states between individuals. Clearly, if social interaction mediated by virtual reality and other digital communication systems is to be successful, it will be necessary to allow for a full range of emotional expressions via a number of communication channels. In face-to-face communication, we express emotion primarily through facial expressions, voice, and through touch. While emotion is also communicated through other nonverbal gestures such as posture and hand signals (Cassell & Thorisson, in press; Collier, 1985), in the current review we focus on emotions transmitted via face, voice and touch. In a review of the emotion literature, Ortony and Turner (1990) discuss the concept of basic emotions. These fundamental emotions (e.g., fear) are the building blocks of other more complex emotions (e.g., jealousy). Furthermore, many people argue that these emotions are innate and universal across cultures (Plutchik, 2001). In terms of defining the set of basic emotions, previous work has provided very disparate sets of such emotions. For example, Watson (1930) has limited his list to “hardwired” emotions such as fear, love, and rage. On the other hand, Ekman & Friesen (1975) have limited their list to those discernable through facial movements such as anger, disgust, fear, joy, sadness, and surprise. The psychophysiology literature adds to our understanding of emotions by suggesting a fundamental biphasic model (Bradley, 2000). In other words, emotions can be thought of as variations on two axes: hedonic valence and intensity. Pleasurable emotions have high hedonic valences, while negative emotions have low hedonic valences. This line of research suggests that while emotions may appear complex, much of the variation may nonetheless be mapped onto a two-dimensional scale. This notion also dovetails with research in embodied cognition that has shown that human language is spatially organized (Richardson, Spivey, Edelman, & Naples, 2001). For example, certain words are judged to be more “horizontal” while other words are judged to be more “vertical”. In the current work, we were not concerned predominantly with what constitutes a basic or universal emotion. Instead, we attempted to identify emotions that could be transmitted through virtual touch, and provide an initial framework for classifying and interpreting those digital haptic emotions. To this end, we reviewed theoretical frameworks that have attempted to accomplish this goal with other nonverbal behaviors—most notably facial expressions and paralinguistics. Facial Expressions Research in facial expressions has received much attention from social scientists for the past fifty years. Some researchers argue that the face is a portal to one’s internal mental state (Ekman & Friesen, 1978; Izard, 1971). These scholars argue that when an emotion occurs, a series of biological events follow that produce changes in a person—one of those manifestations is movement in facial muscles.
Moreover, these changes in facial expressions are also correlated with other physiological changes such as heart rate or blood pressure (Ekman & Friesen, 1976). Alternatively, other researchers argue that the correspondence of facial expressions to actual emotion is not as high as many think. For example, Fridland (1994) believes that people use facial expressions as a tool to strategically elicit behaviors from others or to accomplish social goals in interaction. Similarly, other researchers argue that not all emotions have corresponding facial expressions (Cacioppo et al., 1997). Nonetheless, most scholars would agree that there is some value to examining facial expressions of another if one’s goal is to gain an understanding of that person’s current mental state. Ekman’s groundbreaking work on emotions has provided tools to begin forming dimensions on which to classify his set of six basic emotions (Ekman & Friesen, 1975). Figure 1 provides a framework for the facial classifications developed by those scholars.", "title": "" }, { "docid": "5e8154a99b4b0cc544cab604b680ebd2", "text": "This work presents performance of robust wearable antennas intended to operate in Wireless Body Area Networks (W-BAN) in UHF, TETRAPOL communication band, 380-400 MHz. We propose a Planar Inverted F Antenna (PIFA) as reliable antenna type for UHF W-BAN applications. In order to satisfy the robustness requirements of the UHF band, both from communication and mechanical aspect, a new technology for building these antennas was proposed. The antennas are built out of flexible conductive sheets encapsulated inside a silicone based elastomer, Polydimethylsiloxane (PDMS). The proposed antennas are resistive to washing, bending and perforating. From the communication point of view, opting for a PIFA antenna type we solve the problem of coupling to the wearer and thus improve the overall communication performance of the antenna. Several different tests and comparisons were performed in order to check the stability of the proposed antennas when they are placed on the wearer or left in a common everyday environ- ment, on the ground, table etc. S11 deviations are observed and compared with the commercially available wearable antennas. As a final check, the antennas were tested in the frame of an existing UHF TETRAPOL communication system. All the measurements were performed in a real university campus scenario, showing reliable and good performance of the proposed PIFA antennas.", "title": "" }, { "docid": "213daea0f909e9731aa77e001c447654", "text": "In the wake of a polarizing election, social media is laden with hateful content. To address various limitations of supervised hate speech classification methods including corpus bias and huge cost of annotation, we propose a weakly supervised twopath bootstrapping approach for an online hate speech detection model leveraging large-scale unlabeled data. This system significantly outperforms hate speech detection systems that are trained in a supervised manner using manually annotated data. Applying this model on a large quantity of tweets collected before, after, and on election day reveals motivations and patterns of inflammatory language.", "title": "" }, { "docid": "2629277b98d661006e90358fa27f4ac5", "text": "In this paper, a well known problem called the Shortest Path Problem (SPP) has been considered in an uncertain environment. 
The cost parameters for traveling each arc have been considered as Intuitionistic Fuzzy Numbers (IFNs) which are the more generalized form of fuzzy numbers involving a degree of acceptance and a degree of rejection. A heuristic methodology for solving the SPP has been developed, which aim to exploit tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness and low cost solution corresponding to the minimum-cost path or the shortest path. The Modified Intuitionistic Fuzzy Dijkstra’s Algorithm (MIFDA) has been proposed in this paper for solving Intuitionistic Fuzzy Shortest Path Problem (IFSPP) using the Intuitionistic Fuzzy Hybrid Geometric (IFHG) operator. A numerical example illustrates the effectiveness of the proposed method.", "title": "" }, { "docid": "189709296668a8dd6f7be8e1b2f2e40f", "text": "Uncertain data management, querying and mining have become important because the majority of real world data is accompanied with uncertainty these days. Uncertainty in data is often caused by the deficiency in underlying data collecting equipments or sometimes manually introduced to preserve data privacy. This work discusses the problem of distance-based outlier detection on uncertain datasets of Gaussian distribution. The Naive approach of distance-based outlier on uncertain data is usually infeasible due to expensive distance function. Therefore a cell-based approach is proposed in this work to quickly identify the outliers. The infinite nature of Gaussian distribution prevents to devise effective pruning techniques. Therefore an approximate approach using bounded Gaussian distribution is also proposed. Approximating Gaussian distribution by bounded Gaussian distribution enables an approximate but more efficient cell-based outlier detection approach. An extensive empirical study on synthetic and real datasets show that our proposed approaches are effective, efficient and scalable.", "title": "" }, { "docid": "869ad7b6bf74f283c8402958a6814a21", "text": "In this paper, we make a move to build a dialogue system for automatic diagnosis. We first build a dataset collected from an online medical forum by extracting symptoms from both patients’ self-reports and conversational data between patients and doctors. Then we propose a taskoriented dialogue system framework to make the diagnosis for patients automatically, which can converse with patients to collect additional symptoms beyond their self-reports. Experimental results on our dataset show that additional symptoms extracted from conversation can greatly improve the accuracy for disease identification and our dialogue system is able to collect these symptoms automatically and make a better diagnosis.", "title": "" }, { "docid": "bee25514d15321f4f0bdcf867bb07235", "text": "We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full FrankWolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate FrankWolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. 
Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers.", "title": "" }, { "docid": "ff2beca595c408f3ea5df6a8494301c4", "text": "The objective of the present study was to examine to what extent autonomy in problem-based learning (PBL) results in cognitive engagement with the topic at hand. To that end, a short self-report instrument was devised and validated. Moreover, it was examined how cognitive engagement develops as a function of the learning process and the extent to which cognitive engagement determines subsequent levels of cognitive engagement during a one-day PBL event. Data were analyzed by means of confirmatory factor analysis, repeated measures ANOVA, and path analysis. The results showed that the new measure of situational cognitive engagement is valid and reliable. Furthermore, the results revealed that students' cognitive engagement significantly increased as a function of the learning event. Implications of these findings for PBL are discussed.", "title": "" }, { "docid": "84f9a6913a7689a5bbeb04f3173237b2", "text": "BACKGROUND\nPsychosocial treatments are the mainstay of management of autism in the UK but there is a notable lack of a systematic evidence base for their effectiveness. Randomised controlled trial (RCT) studies in this area have been rare but are essential because of the developmental heterogeneity of the disorder. We aimed to test a new theoretically based social communication intervention targeting parental communication in a randomised design against routine care alone.\n\n\nMETHODS\nThe intervention was given in addition to existing care and involved regular monthly therapist contact for 6 months with a further 6 months of 2-monthly consolidation sessions. It aimed to educate parents and train them in adapted communication tailored to their child's individual competencies. Twenty-eight children with autism were randomised between this treatment and routine care alone, stratified for age and baseline severity. Outcome was measured at 12 months from commencement of intervention, using standardised instruments.\n\n\nRESULTS\nAll cases studied met full Autism Diagnostic Interview (ADI) criteria for classical autism. Treatment and controls had similar routine care during the study period and there were no study dropouts after treatment had started. The active treatment group showed significant improvement compared with controls on the primary outcome measure--Autism Diagnostic Observation Schedule (ADOS) total score, particularly in reciprocal social interaction--and on secondary measures of expressive language, communicative initiation and parent-child interaction. Suggestive but non-significant results were found in Vineland Adaptive Behaviour Scales (Communication Sub-domain) and ADOS stereotyped and restricted behaviour domain.\n\n\nCONCLUSIONS\nA Randomised Treatment Trial design of this kind in classical autism is feasible and acceptable to patients. This pilot study suggests significant additional treatment benefits following a targeted (but relatively non-intensive) dyadic social communication treatment, when compared with routine care. The study needs replication on larger and independent samples. It should encourage further RCT designs in this area.", "title": "" }, { "docid": "08bde5682e7fe0c775fabb7c051ab3db", "text": "We propose a higher-level associative memory for learning adversarial networks. Generative adversarial network (GAN) framework has a discriminator and a generator network. 
The generator (G) maps white noise (z) to data samples while the discriminator (D) maps data samples to a single scalar. To do so, G learns how to map from high-level representation space to data space, and D learns to do the opposite. We argue that higher-level representation spaces need not necessarily follow a uniform probability distribution. In this work, we use Restricted Boltzmann Machines (RBMs) as a higher-level associative memory and learn the probability distribution for the high-level features generated by D. The associative memory samples its underlying probability distribution and G learns how to map these samples to data space. The proposed associative adversarial networks (AANs) are generative models in the higher-levels of the learning, and use adversarial nonstochastic models D and G for learning the mapping between data and higher-level representation spaces. Experiments show the potential of the proposed networks.", "title": "" }, { "docid": "f33f6263ef10bd702ddb18664b68a09f", "text": "Research over the past five years has shown significant performance improvements using a technique called adaptive compilation. An adaptive compiler uses a compile-execute-analyze feedback loop to find the combination of optimizations and parameters that minimizes some performance goal, such as code size or execution time.Despite its ability to improve performance, adaptive compilation has not seen widespread use because of two obstacles: the large amounts of time that such systems have used to perform the many compilations and executions prohibits most users from adopting these systems, and the complexity inherent in a feedback-driven adaptive system has made it difficult to build and hard to use.A significant portion of the adaptive compilation process is devoted to multiple executions of the code being compiled. We have developed a technique called virtual execution to address this problem. Virtual execution runs the program a single time and preserves information that allows us to accurately predict the performance of different optimization sequences without running the code again. Our prototype implementation of this technique significantly reduces the time required by our adaptive compiler.In conjunction with this performance boost, we have developed a graphical-user interface (GUI) that provides a controlled view of the compilation process. By providing appropriate defaults, the interface limits the amount of information that the user must provide to get started. At the same time, it lets the experienced user exert fine-grained control over the parameters that control the system.", "title": "" } ]
scidocsrr
1a79befd9ec0c261b53de534e9e195f7
Bilingualism provides a neural reserve for aging populations
[ { "docid": "a64ae2e6e72b9e38c700ddd62b4f6bf3", "text": "Cerebral gray-matter volume (GMV) decreases in normal aging but the extent of the decrease may be experience-dependent. Bilingualism may be one protective factor, and in this article we examine its potential protective effect on GMV in a region that shows strong age-related decreases: the left anterior temporal pole. This region is held to function as a conceptual hub and might be expected to be a target of plastic changes in bilingual speakers because of the requirement for these speakers to store and differentiate lexical concepts in 2 languages to guide speech production and comprehension processes. In a whole brain comparison of bilingual speakers (n = 23) and monolingual speakers (n = 23), regressing out confounding factors, we find more extensive age-related decreases in GMV in the monolingual brain and significantly increased GMV in left temporal pole for bilingual speakers. Consistent with a specific neuroprotective effect of bilingualism, region of interest analyses showed a significant positive correlation between naming performance in the second language and GMV in this region. The effect appears to be bilateral, though, because there was a nonsignificantly different effect of naming performance on GMV in the right temporal pole. Our data emphasize the vulnerability of the temporal pole to normal aging and the value of bilingualism as both a general and specific protective factor to GMV decreases in healthy aging.", "title": "" } ]
[ { "docid": "e49f04ff71d0718eff9a3a6005b2a689", "text": "Energy-Based Models (EBMs) capture dependencies between variables by associating a scalar energy to each configuration of the variables. Inference consists in clamping the value of observed variables and finding configurations of the remaining variables that minimize the energy. Learning consists in finding an energy function in which observed configurations of the variables are given lower energies than unobserved ones. The EBM approach provides a common theoretical framework for many learning models, including traditional discriminative and generative approaches, as well as graph-transformer networks, conditional random fields, maximum margin Markov networks, and several manifold learning methods. Probabilistic models must be properly normalized, which sometimes requires evaluating intractable integrals over the space of all possible variable configurations. Since EBMs have no requirement for proper normalization, this problem is naturally circumvented. EBMs can be viewed as a form of non-probabilistic factor graphs, and they provide considerably more flexibility in the design of architectures and training criteria than probabilistic approaches.", "title": "" }, { "docid": "5a61a6249b389a26d439f9a66efcc5f5", "text": "The vast majority of current robot mapping and navigation systems require specific, well-characterized sensors that may require human-supervised calibration and are applicable only in one type of environment. Furthermore, if a sensor degrades in performance, either through damage to itself or changes in environmental conditions, the effect on the mapping system is usually catastrophic. In contrast, the natural world presents robust, reasonably well-characterized solutions to these problems. Using simple movement behaviors and neural learning mechanisms, rats calibrate their sensors for mapping and navigation in an incredibly diverse range of environments and then go on to adapt to sensor damage and changes in the environment over their lifetimes. In this paper, we introduce similar movement-based autonomous calibration techniques that calibrate place recognition and self-motion processes as well as methods for online multi-sensor weighting and fusion. We present calibration and mapping results from multiple robot platforms and multisensory configurations in an office building, university campus and forest. With moderate assumptions and almost no prior knowledge of the robot, sensor suite or environment, the methods enable the bio-inspired RatSLAM system to generate topologically correct maps in the majority of experiments.", "title": "" }, { "docid": "6cf4297e4c87f8e55d59867ac137e56d", "text": "We present a novel approach to RTE that exploits a structure-oriented sentence representation followed by a similarity function. The structural features are automatically acquired from tree skeletons that are extracted and generalized from dependency trees. Our method makes use of a limited size of training data without any external knowledge bases (e.g. WordNet) or handcrafted inference rules. We have achieved an accuracy of 71.1% on the RTE-3 development set performing a 10-fold cross validation and 66.9% on the RTE-3 test data.", "title": "" }, { "docid": "5e858796f025a9e2b91109835d827c68", "text": "Several divergent application protocols have been proposed for Internet of Things (IoT) solutions including CoAP, REST, XMPP, AMQP, MQTT, DDS, and others.
Each protocol focuses on a specific aspect of IoT communications. The lack of a protocol that can handle the vertical market requirements of IoT applications including machine-to-machine, machine-to-server, and server-to-server communications has resulted in a fragmented market between many protocols. In turn, this fragmentation is a main hindrance in the development of new services that require the integration of multiple IoT services to unlock new capabilities and provide horizontal integration among services. In this work, after articulating the major shortcomings of the current IoT protocols, we outline a rule-based intelligent gateway that bridges the gap between existing IoT protocols to enable the efficient integration of horizontal IoT services. While this intelligent gateway enhances the gloomy picture of protocol fragmentation in the context of IoT, it does not address the root cause of this fragmentation, which lies in the inability of the current protocols to offer a wide range of QoS guarantees. To offer a solution that stems the root cause of this protocol fragmentation issue, we propose a generic IoT protocol that is flexible enough to address the IoT vertical market requirements. In this regard, we enhance the baseline MQTT protocol by allowing it to support rich QoS features by exploiting a mix of IP multicasting, intelligent broker queuing management, and traffic analytics techniques. Our initial evaluation of the lightweight enhanced MQTT protocol reveals significant improvement over the baseline protocol in terms of the delay performance.", "title": "" }, { "docid": "fb039b1837209a3f3c01289d9adc275b", "text": "This paper presents a comprehensive research study of the detection of US traffic signs. Until now, the research in Traffic Sign Recognition systems has been centered on European traffic signs, but signs can look very different across different parts of the world, and a system which works well in Europe may indeed not work in the US. We go over the recent advances in traffic sign detection and discuss the differences in signs across the world. Then we present a comprehensive extension to the publicly available LISA-TS traffic sign dataset, almost doubling its size, now with HD-quality footage. The extension is made with testing of tracking sign detection systems in mind, providing videos of traffic sign passes. We apply the Integral Channel Features and Aggregate Channel Features detection methods to US traffic signs and show performance numbers outperforming all previous research on US signs (while also performing similarly to the state of the art on European signs). Integral Channel Features have previously been used successfully for European signs, while Aggregate Channel Features have never been applied to the field of traffic signs. We take a look at the performance differences between the two methods and analyze how they perform on very distinctive signs, as well as white, rectangular signs, which tend to blend into their environment.", "title": "" }, { "docid": "d135e72c317ea28a64a187b17541f773", "text": "Automatic face recognition (AFR) is an area with immense practical potential which includes a wide range of commercial and law enforcement applications, and it continues to be one of the most active research areas of computer vision. 
Even after over three decades of intense research, the state-of-the-art in AFR continues to improve, benefiting from advances in a range of different fields including image processing, pattern recognition, computer graphics and physiology. However, systems based on visible spectrum images continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease their accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject.", "title": "" }, { "docid": "c0ef15616ba357cb522b828e03a5298c", "text": "This paper introduces the compact genetic algorithm (cGA) which represents the population as a probability distribution over the set of solutions and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover. It processes each gene independently and requires less memory than the simple GA. The development of the compact GA is guided by a proper understanding of the role of the GA’s parameters and operators. The paper clearly illustrates the mapping of the simple GA’s parameters into those of an equivalent compact GA. Computer simulations compare both algorithms in terms of solution quality and speed. Finally, this work raises important questions about the use of information in a genetic algorithm, and its ramifications show us a direction that can lead to the design of more efficient GA’s.", "title": "" }, { "docid": "a28567e108f00e3b251882404f2574b2", "text": "Sirs: A 46-year-old woman was referred to our hospital because of suspected cerebral ischemia. Two days earlier the patient had recognized a left-sided weakness and clumsiness. On neurological examination we found a mild left-sided hemiparesis and hemiataxia. There was a generalized shrinking violaceous netlike pattering of the skin especially on both legs and arms but also on the trunk and buttocks (Fig. 1). The patient reported the skin changing to be more prominent on cold exposure. The patient’s family remembered this skin finding to be evident since the age of five years. A diagnosis of livedo racemosa had been made 5 years ago. The neuropsychological assessment of this highly educated civil servant revealed a slight cognitive decline. MRI showed a right-sided cerebral ischemia in the middle cerebral artery (MCA) territory. Her medical history was significant for migraine-like headache for many years, a miscarriage 18 years before and a deep vein thrombosis of the left leg six years ago. She had no history of smoking or other cerebrovascular risk factors including no estrogen-containing oral contraceptives. The patient underwent intensive examinations including duplex sonography of extraand intracranial arteries, transesophageal echocardiography, 24-h ECG, 24-h blood pressure monitoring, multimodal evoked potentials, electroencephalography, lumbar puncture and sonography of abdomen. All these tests were negative. Extensive laboratory examinations revealed a heterozygote prothrombin 20210 mutation, which is associated with a slightly increased risk for thrombosis. Antiphospholipid antibodies (aplAB) and other laboratory examinations to exclude vasculitis, toxic metabolic disturbances and other causes for livedo racemosa were negative. 
Skin biopsy showed vasculopathy with intimal proliferation and an occluding thrombus. The patient was diagnosed as having antiphospholipid-antibody-negative Sneddon's syndrome (SS) based on cerebral ischemia combined with wide-spread livedo racemosa associated with a history of miscarriage, deep vein thrombosis, migraine-like headaches and mild cognitive decline. We started long-term prophylactic pharmacological therapy with captopril as a myocyte proliferation agent and with aspirin as an antiplatelet therapy. Furthermore, we recommended thrombosis prophylaxis in case of immobilization. One month later the patient experienced vein thrombosis of her right forearm and suffered from dyspnea. Antiphospholipid antibody testing again was negative. EBT and CT of thorax showed an aneurysmatic dilatation of aorta ascendens up to 4.5 cm. After careful consideration of the possible disadvantages we nevertheless decided to start long-term anticoagulation instead of antiplatelet therapy because of the second thrombotic event. The elucidating and interesting issue of this case is the association of miscarriage and two vein thromboses in aplAB-negative SS. Little is known about this phenomenon and there are only a few reports about these symptoms in aplAB-negative SS.", "title": "" }, { "docid": "3fec27391057a4c14f2df5933c4847d8", "text": "This article explains how entrepreneurship can help resolve the environmental problems of global socio-economic systems. Environmental economics concludes that environmental degradation results from the failure of markets, whereas the entrepreneurship literature argues that opportunities are inherent in market failure. A synthesis of these literatures suggests that environmentally relevant market failures represent opportunities for achieving profitability while simultaneously reducing environmentally degrading economic behaviors. It also implies conceptualizations of sustainable and environmental entrepreneurship which detail how entrepreneurs seize the opportunities that are inherent in environmentally relevant market failures. Finally, the article examines the ability of the proposed theoretical framework to transcend its environmental context and provide insight into expanding the domain of the study of entrepreneurship. © 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "f80ff6bbe60fea2424cbbada4280b79e", "text": "While parents have a critical influence on reducing adolescent risk taking, adolescents' access to online spaces presents significant and novel challenges to parents' ability to reduce their youth's involvement in cyberbullying. The present study reviews the existing literature on parents' influence (i.e., parental warmth and parental monitoring) on adolescent cyberbullying, both as victims and perpetrators. 23 mostly cross-sectional articles were identified for this review. Findings indicate that parental warmth is consistently associated with lower cyberbullying, both as victims and perpetrators. For parental monitoring, strategies that are focused on parental control, such as restricting the Internet, appear to be only weakly related to youth's involvement in cyberbullying victimization and perpetration. In contrast, strategies that are more collaborative in nature (e.g., evaluative mediation and co-use) are more closely connected to cyberbullying victimization and perpetration, although evidence suggests that the effectiveness of these practices varies by sex and ethnicity.
Results underscore the need for parents to provide emotional warmth that might support adolescent's disclosure of online activity. Implications for practice and future research are reviewed.", "title": "" }, { "docid": "db0c7a200d76230740e027c2966b066c", "text": "BACKGROUND\nPromotion and provision of low-cost technologies that enable improved water, sanitation, and hygiene (WASH) practices are seen as viable solutions for reducing high rates of morbidity and mortality due to enteric illnesses in low-income countries. A number of theoretical models, explanatory frameworks, and decision-making models have emerged which attempt to guide behaviour change interventions related to WASH. The design and evaluation of such interventions would benefit from a synthesis of this body of theory informing WASH behaviour change and maintenance.\n\n\nMETHODS\nWe completed a systematic review of existing models and frameworks through a search of related articles available in PubMed and in the grey literature. Information on the organization of behavioural determinants was extracted from the references that fulfilled the selection criteria and synthesized. Results from this synthesis were combined with other relevant literature, and from feedback through concurrent formative and pilot research conducted in the context of two cluster-randomized trials on the efficacy of WASH behaviour change interventions to inform the development of a framework to guide the development and evaluation of WASH interventions: the Integrated Behavioural Model for Water, Sanitation, and Hygiene (IBM-WASH).\n\n\nRESULTS\nWe identified 15 WASH-specific theoretical models, behaviour change frameworks, or programmatic models, of which 9 addressed our review questions. Existing models under-represented the potential role of technology in influencing behavioural outcomes, focused on individual-level behavioural determinants, and had largely ignored the role of the physical and natural environment. IBM-WASH attempts to correct this by acknowledging three dimensions (Contextual Factors, Psychosocial Factors, and Technology Factors) that operate on five-levels (structural, community, household, individual, and habitual).\n\n\nCONCLUSIONS\nA number of WASH-specific models and frameworks exist, yet with some limitations. The IBM-WASH model aims to provide both a conceptual and practical tool for improving our understanding and evaluation of the multi-level multi-dimensional factors that influence water, sanitation, and hygiene practices in infrastructure-constrained settings. We outline future applications of our proposed model as well as future research priorities needed to advance our understanding of the sustained adoption of water, sanitation, and hygiene technologies and practices.", "title": "" }, { "docid": "2e0547228597476a28c6b99b6f927299", "text": "Several virtual reality (VR) applications for the understanding, assessment and treatment of mental health problems have been developed in the last 10 years. The purpose of this review is to outline the current state of virtual reality research in the treatment of mental health problems. PubMed and PsycINFO were searched for all articles containing the words “virtual reality”. In addition a manual search of the references contained in the papers resulting from this search was conducted and relevant periodicals were searched. Studies reporting the results of treatment utilizing VR in the mental health field and involving at least one patient were identified. 
More than 50 studies using VR were identified, the majority of which were case studies. Seventeen employed a between groups design: 4 involved patients with fear of flying; 3 involved patients with fear of heights; 3 involved patients with social phobia/public speaking anxiety; 2 involved people with spider phobia; 2 involved patients with agoraphobia; 2 involved patients with body image disturbance and 1 involved obese patients. There are both advantages in terms of delivery and disadvantages in terms of side effects to using VR. Although virtual reality based therapy appears to be superior to no treatment the effectiveness of VR therapy over traditional therapeutic approaches is not supported by the research currently available. There is a lack of good quality research on the effectiveness of VR therapy. Before clinicians will be able to make effective use of this emerging technology greater emphasis must be placed on controlled trials with clinically identified populations.", "title": "" }, { "docid": "e2308b435dddebc422ff49a7534bbf83", "text": "Memory encryption has yet to be used at the core of operating system designs to provide confidentiality of code and data. As a result, numerous vulnerabilities exist at every level of the software stack. Three general approaches have evolved to rectify this problem. The most popular approach is based on complex hardware enhancements; this allows all encryption and decryption to be conducted within a well-defined trusted boundary. Unfortunately, these designs have not been integrated within commodity processors and have primarily been explored through simulation with very few prototypes. An alternative approach has been to augment existing hardware with operating system enhancements for manipulating keys, providing improved trust. This approach has provided insights into the use of encryption but has involved unacceptable overheads and has not been adopted in commercial operating systems. Finally, specialized industrial devices have evolved, potentially adding coprocessors, to increase security of particular operations in specific operating environments. However, this approach lacks generality and has introduced unexpected vulnerabilities of its own. Recently, memory encryption primitives have been integrated within commodity processors such as the Intel i7, AMD bulldozer, and multiple ARM variants. This opens the door for new operating system designs that provide confidentiality across the entire software stack outside the CPU. To date, little practical experimentation has been conducted, and the improvements in security and associated performance degradation has yet to be quantified. This article surveys the current memory encryption literature from the viewpoint of these central issues.", "title": "" }, { "docid": "d09d37920360740d7a6eafb9c546da02", "text": "In this work, we study the paralinguistic speech task of eating condition classification and present our submitted classification system for the INTERSPEECH 2015 Computational Paralinguistics challenge. We build upon a deep learning language identification system, which we repurpose for general audio sequence classification. The main idea is that we train local convolutional neural network classifiers that automatically learn representations on smaller windows of the full sequence’s spectrum and to aggregate multiple local classifications towards a full sequence classification. 
A particular challenge of the task is training data scarcity and the resulting overfitting of neural network methods, which we tackle with dropout, synthetic data augmentation and transfer learning with out-of-domain data from a language identification task. Our final submitted system achieved an UAR score of 75.9% for 7-way eating condition classification, which is a relative improvement of 15% over the baseline.", "title": "" }, { "docid": "5afe5504566e60cbbb50f83501eee06c", "text": "This paper explores theoretical issues in ergonomics related to semantics and the emotional content of design. The aim is to find answers to the following questions: how to design products triggering \"happiness\" in one's mind; which product attributes help in the communication of positive emotions; and finally, how to evoke such emotions through a product. In other words, this is an investigation of the \"meaning\" that could be designed into a product in order to \"communicate\" with the user at an emotional level. A literature survey of recent design trends, based on selected examples of product designs and semantic applications to design, including the results of recent design awards, was carried out in order to determine the common attributes of their design language. A review of Good Design Award winning products that are said to convey and/or evoke emotions in the users has been done in order to define good design criteria. These criteria have been discussed in relation to user emotional responses and a selection of these has been given as examples.", "title": "" }, { "docid": "670b7dfafc95a82e444c86dd7d5afeb6", "text": "We investigate the use of attentional neural network layers in order to learn a ‘behavior characterization’ which can be used to drive novelty search and curiosity-based policies. The space is structured towards answering a particular distribution of questions, which are used in a supervised way to train the attentional neural network. We find that in a 2d exploration task, the structure of the space successfully encodes local sensory-motor contingencies such that even a greedy local ‘do the most novel action’ policy with no reinforcement learning or evolution can explore the space quickly. We also apply this to a high/low number guessing game task, and find that guessing according to the learned attention profile performs active inference and can discover the correct number more quickly than an exact but passive approach.", "title": "" }, { "docid": "4129d2906d3d3d96363ff0812c8be692", "text": "In this paper, we propose a picture recommendation system built on Instagram, which facilitates users to query correlated pictures by keying in hashtags or clicking images. Users can access the value-added information (or pictures) on Instagram through the recommendation platform. In addition to collecting available hashtags using the Instagram API, the system also uses the Free Dictionary to build the relationships between all the hashtags in a knowledge base. Thus, two kinds of correlations can be provided for a query in the system; i.e., user-defined correlation and system-defined correlation. Finally, the experimental results show that users have good satisfaction degrees with both user-defined correlation and system-defined correlation methods.", "title": "" }, { "docid": "00280615cb28a6f16bde541af2bc356d", "text": "Querying with an example image is a simple and intuitive interface to retrieve information from a visual database. 
Most of the research in image retrieval has focused on the task of instance-level image retrieval, where the goal is to retrieve images that contain the same object instance as the query image. In this work we move beyond instance-level retrieval and consider the task of semantic image retrieval in complex scenes, where the goal is to retrieve images that share the same semantics as the query image. We show that, despite its subjective nature, the task of semantically ranking visual scenes is consistently implemented across a pool of human annotators. We also show that a similarity based on human-annotated region-level captions is highly correlated with the human ranking and constitutes a good computable surrogate. Following this observation, we learn a visual embedding of the images where the similarity in the visual space is correlated with their semantic similarity surrogate. We further extend our model to learn a joint embedding of visual and textual cues that allows one to query the database using a text modifier in addition to the query image, adapting the results to the modifier. Finally, our model can ground the ranking decisions by showing regions that contributed the most to the similarity between pairs of images, providing a visual explanation of the similarity.", "title": "" }, { "docid": "b83cd79ce5086124ab7920ab589e61bf", "text": "Many of today’s most successful video segmentation methods use long-term feature trajectories as their first processing step. Such methods typically use spectral clustering to segment these trajectories, implicitly assuming that motion is translational in image space. In this paper, we explore the idea of explicitly fitting more general motion models in order to classify trajectories as foreground or background. We find that homographies are sufficient to model a wide variety of background motions found in real-world videos. Our simple approach achieves competitive performance on the DAVIS benchmark, while using techniques complementary to state-of-the-art approaches.", "title": "" }, { "docid": "bab949abe2d00567853504e38c84a1c9", "text": "7SK RNA is a key player in the regulation of polymerase II transcription. 7SK RNA was considered as a highly conserved vertebrate innovation. The discovery of poorly conserved homologs in several insects and lophotrochozoans, however, implies a much earlier evolutionary origin. The mechanism of 7SK function requires interaction with the proteins HEXIM and La-related protein 7. Here, we present a comprehensive computational analysis of these two proteins in metazoa, and we extend the collection of 7SK RNAs by several additional candidates. In particular, we describe 7SK homologs in Caenorhabditis species. Furthermore, we derive an improved secondary structure model of 7SK RNA, which shows that the structure is quite well-conserved across animal phyla despite the extreme divergence at sequence level.", "title": "" } ]
scidocsrr
dd9521b32c5cadcf5a2878d40ccb0534
Influence Maximization on Social Graphs: A Survey
[ { "docid": "8a7f59d73f202267bf0e52d758396975", "text": "We consider the combinatorial optimization problem of finding the most influential nodes on a large-scale social network for two widely-used fundamental stochastic diffusion models. It was shown that a natural greedy strategy can give a good approximate solution to this optimization problem. However, a conventional method under the greedy algorithm needs a large amount of computation, since it estimates the marginal gains for the expected number of nodes influenced by a set of nodes by simulating the random process of each model many times. In this paper, we propose a method of efficiently estimating all those quantities on the basis of bond percolation and graph theory, and apply it to approximately solving the optimization problem under the greedy algorithm. Using real-world large-scale networks including blog networks, we experimentally demonstrate that the proposed method can outperform the conventional method, and achieve a large reduction in computational cost.", "title": "" } ]
[ { "docid": "3c07ea072adb8f63b3cba36e39974d87", "text": "We describe a general methodology for the design of large-scale recursive neural network architectures (DAG-RNNs) which comprises three fundamental steps: (1) representation of a given domain using suitable directed acyclic graphs (DAGs) to connect visible and hidden node variables; (2) parameterization of the relationship between each variable and its parent variables by feedforward neural networks; and (3) application of weight-sharing within appropriate subsets of DAG connections to capture stationarity and control model complexity. Here we use these principles to derive several specific classes of DAG-RNN architectures based on lattices, trees, and other structured graphs. These architectures can process a wide range of data structures with variable sizes and dimensions. While the overall resulting models remain probabilistic, the internal deterministic dynamics allows efficient propagation of information, as well as training by gradient descent, in order to tackle large-scale problems. These methods are used here to derive state-of-the-art predictors for protein structural features such as secondary structure (1D) and both fine- and coarse-grained contact maps (2D). Extensions, relationships to graphical models, and implications for the design of neural architectures are briefly discussed. The protein prediction servers are available over the Web at: www.igb.uci.edu/tools.htm.", "title": "" }, { "docid": "7a1f625799740f4f6a9f162fd200648d", "text": "We present the first work on antecedent selection for bridging resolution without restrictions on anaphor or relation types. Our model integrates global constraints on top of a rich local feature set in the framework of Markov logic networks. The global model improves over the local one and both strongly outperform a reimplementation of prior work.", "title": "" }, { "docid": "acd6c7715fb1e15a123778033672f070", "text": "Classical statistical inference of experimental data assumes that the treatment affects the test group but not the control group. This assumption will typically be violated when experimenting in marketplaces because of general equilibrium effects: changing test demand affects the supply available to the control group. We illustrate this with an email marketing campaign performed by eBay. Ignoring test-control interference leads to estimates of the campaign's effectiveness which are too large by a factor of around two. We present the simple economics of this bias in a supply and demand framework, showing that the bias is larger in magnitude where there is more inelastic supply, and is positive if demand is elastic.", "title": "" }, { "docid": "2a487ff4b9218900e9a0e480c23e4c25", "text": "5.1 CONVENTIONAL ACTUATORS, SHAPE MEMORY ALLOYS, AND ELECTRORHEOLOGICAL FLUIDS", "title": "" }, { "docid": "b030ac3b5ae779744357d6778eb1ffc5", "text": "The American and Russian/Soviet space programs independently uncovered psychosocial risks inherent in long-duration space missions. Now that these two countries are working together on the International Space Station (ISS), American-Russian cultural differences pose an additional set of risk factors.
These may echo cultural differences that have been observed in the general population of the two countries and in space analogue settings, but little is known about how relevant these are to the select population of space program personnel. The evidence for the existence of mission-relevant cultural differences is reviewed and includes cultural values, emotional expressivity, personal space norms, and personality characteristics. The review is focused primarily on Russia and the United States, but also includes other ISS partner countries. Cultural differences among space program personnel may have a wide range of effects. Moreover, culture-related strains may increase the probability of distress and impairment. Such factors could affect the individual and interpersonal functioning of both crewmembers and mission control personnel, whose performance is also critical for mission safety and success. Examples from the anecdotal and empirical literature are given to illustrate these points. The use of existing assessment strategies runs the risk of overlooking important early warning signs of behavioral health difficulties. By paying more attention to cultural differences and how they might be manifested, we are more likely to detect problems early while they are still mild and resolvable.", "title": "" }, { "docid": "86c998f5ffcddb0b74360ff27b8fead4", "text": "Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. However, the computational cost to learn attention distributions for every pair of multimodal input channels is prohibitively expensive. To solve this problem, co-attention builds two separate attention distributions for each modality neglecting the interaction between multimodal inputs. In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions among two groups of input channels, while low-rank bilinear pooling extracts the joint representations for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit eight-attention maps of the BAN efficiently. We quantitatively and qualitatively evaluate our model on visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing that BAN significantly outperforms previous methods and achieves new state-of-the-arts on both datasets.", "title": "" }, { "docid": "0e2b885774f69342ade2b9ad1bc84835", "text": "History repeatedly demonstrates that rural communities have unique technological needs. Yet, we know little about how rural communities use modern technologies, so we lack knowledge on how to design for them. To address this gap, our empirical paper investigates behavioral differences between more than 3,000 rural and urban social media users. Using a dataset collected from a broadly popular social network site, we analyze users' profiles, 340,000 online friendships and 200,000 interpersonal messages. Using social capital theory, we predict differences between rural and urban users and find strong evidence supporting our hypotheses. Namely, rural people articulate far fewer friends online, and those friends live much closer to home. Our results also indicate that the groups have substantially different gender distributions and use privacy features differently. 
We conclude by discussing design implications drawn from our findings; most importantly, designers should reconsider the binary friend-or-not model to allow for incremental trust-building.", "title": "" }, { "docid": "de39f498f28cf8cfc01f851ca3582d32", "text": "Program autotuning has been shown to achieve better or more portable performance in a number of domains. However, autotuners themselves are rarely portable between projects, for a number of reasons: using a domain-informed search space representation is critical to achieving good results; search spaces can be intractably large and require advanced machine learning techniques; and the landscape of search spaces can vary greatly between different problems, sometimes requiring domain specific search techniques to explore efficiently.\n This paper introduces OpenTuner, a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests. We demonstrate the efficacy and generality of OpenTuner by building autotuners for 7 distinct projects and 16 total benchmarks, showing speedups over prior techniques of these projects of up to 2.8x with little programmer effort.", "title": "" }, { "docid": "771b1e44b26f749f6ecd9fe515159d9c", "text": "In spoken dialog systems, dialog state tracking refers to the task of correctly inferring the user's goal at a given turn, given all of the dialog history up to that turn. This task is challenging because of speech recognition and language understanding errors, yet good dialog state tracking is crucial to the performance of spoken dialog systems. This paper presents results from the third Dialog State Tracking Challenge, a research community challenge task based on a corpus of annotated logs of human-computer dialogs, with a blind test set evaluation. The main new feature of this challenge is that it studied the ability of trackers to generalize to new entities - i.e. new slots and values not present in the training data. This challenge received 28 entries from 7 research teams. About half the teams substantially exceeded the performance of a competitive rule-based baseline, illustrating not only the merits of statistical methods for dialog state tracking but also the difficulty of the problem.", "title": "" }, { "docid": "c974e6b4031fde2b8e1de3ade33caef4", "text": "A large literature has considered predictability of the mean or volatility of stock returns but little is known about whether the distribution of stock returns more generally is predictable. We explore this issue in a quantile regression framework and consider whether a range of economic state variables are helpful in predicting different quantiles of stock returns representing left tails, right tails or shoulders of the return distribution. Many variables are found to have an asymmetric effect on the return distribution, affecting lower, central and upper quantiles very differently. Out-of-sample forecasts suggest that upper quantiles of the return distribution can be predicted by means of economic state variables although the center of the return distribution is more difficult to predict. 
Economic gains from utilizing information in time-varying quantile forecasts are demonstrated through portfolio selection and option trading experiments. ∗We thank Torben Andersen, Tim Bollerslev, Peter Christoffersen as well as seminar participants at HEC, University of Montreal, University of Toronto, Goldman Sachs and CREATES, University of Aarhus, for helpful comments.", "title": "" }, { "docid": "845eb6625d9e9839800e24830e454906", "text": "Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm(3) brain template in 4-6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/).", "title": "" }, { "docid": "4b999743e8032c8ef1b0a5f63888e832", "text": "In this paper we describe a novel approach to autonomous dirt road following. The algorithm is able to recognize highly curved roads in cluttered color images quite often appearing in offroad scenarios. To cope with large curvatures we apply gaze control and model the road using two different clothoid segments. A Particle Filter incorporating edge and color intensity information is used to simultaneously detect and track the road farther away from the ego vehicle. In addition the particles are used to generate static road segment estimations in a given look ahead distance. These estimations are predicted with respect to ego motion and fused utilizing Kalman filter techniques to generate a smooth local clothoid segment for lateral control of the vehicle.", "title": "" }, { "docid": "e58d7f537b0d703fa1381eee2d721a34", "text": "BACKGROUND\nProvision of high quality transitional care is a challenge for health care providers in many western countries. 
This systematic review was conducted to (1) identify and synthesise research, using randomised control trial designs, on the quality of transitional care interventions compared with standard hospital discharge for older people with chronic illnesses, and (2) make recommendations for research and practice.\n\n\nMETHODS\nEight databases were searched; CINAHL, Psychinfo, Medline, Proquest, Academic Search Complete, Masterfile Premier, SocIndex, Humanities and Social Sciences Collection, in addition to the Cochrane Collaboration, Joanna Briggs Institute and Google Scholar. Results were screened to identify peer reviewed journal articles reporting analysis of quality indicator outcomes in relation to a transitional care intervention involving discharge care in hospital and follow-up support in the home. Studies were limited to those published between January 1990 and May 2013. Study participants included people 60 years of age or older living in their own homes who were undergoing care transitions from hospital to home. Data relating to study characteristics and research findings were extracted from the included articles. Two reviewers independently assessed studies for risk of bias.\n\n\nRESULTS\nTwelve articles met the inclusion criteria. Transitional care interventions reported in most studies reduced re-hospitalizations, with the exception of general practitioner and primary care nurse models. All 12 studies included outcome measures of re-hospitalization and length of stay indicating a quality focus on effectiveness, efficiency, and safety/risk. Patient satisfaction was assessed in six of the 12 studies and was mostly found to be high. Other outcomes reflecting person and family centred care were limited including those pertaining to the patient and carer experience, carer burden and support, and emotional support for older people and their carers. Limited outcome measures were reported reflecting timeliness, equity, efficiencies for community providers, and symptom management.\n\n\nCONCLUSIONS\nGaps in the evidence base were apparent in the quality domains of timeliness, equity, efficiencies for community providers, effectiveness/symptom management, and domains of person and family centred care. Further research that involves the person and their family/caregiver in transitional care interventions is needed.", "title": "" }, { "docid": "3f1f3e66fa1a117ef5c2f44d8f7dcbe8", "text": "The Softmax function is used in the final layer of nearly all existing sequence-tosequence models for language generation. However, it is usually the slowest layer to compute which limits the vocabulary size to a subset of most frequent types; and it has a large memory footprint. We propose a general technique for replacing the softmax layer with a continuous embedding layer. Our primary innovations are a novel probabilistic loss, and a training and inference procedure in which we generate a probability distribution over pre-trained word embeddings, instead of a multinomial distribution over the vocabulary obtained via softmax. We evaluate this new class of sequence-to-sequence models with continuous outputs on the task of neural machine translation. We show that our models train up to 2.5x faster than the state-of-the-art models while achieving comparable translation quality. These models are capable of handling very large vocabularies without compromising on translation quality or speed. 
They also produce more meaningful errors than the softmax-based models, as these errors typically lie in a subspace of the vector space of the reference translations1.", "title": "" }, { "docid": "cfff07dbbc363a3e64b94648e19f2e4b", "text": "Nitrogen (N) starvation and excess have distinct effects on N uptake and metabolism in poplars, but the global transcriptomic changes underlying morphological and physiological acclimation to altered N availability are unknown. We found that N starvation stimulated the fine root length and surface area by 54 and 49%, respectively, decreased the net photosynthetic rate by 15% and reduced the concentrations of NH4+, NO3(-) and total free amino acids in the roots and leaves of Populus simonii Carr. in comparison with normal N supply, whereas N excess had the opposite effect in most cases. Global transcriptome analysis of roots and leaves elucidated the specific molecular responses to N starvation and excess. Under N starvation and excess, gene ontology (GO) terms related to ion transport and response to auxin stimulus were enriched in roots, whereas the GO term for response to abscisic acid stimulus was overrepresented in leaves. Common GO terms for all N treatments in roots and leaves were related to development, N metabolism, response to stress and hormone stimulus. Approximately 30-40% of the differentially expressed genes formed a transcriptomic regulatory network under each condition. These results suggest that global transcriptomic reprogramming plays a key role in the morphological and physiological acclimation of poplar roots and leaves to N starvation and excess.", "title": "" }, { "docid": "3f2d4df1b0ef315ee910636c9439b049", "text": "Real-Time Line and Disk Light Shading\n Eric Heitz and Stephen Hill\n At SIGGRAPH 2016, we presented a new real-time area lighting technique for polygonal sources. In this talk, we will show how the underlying framework, based on Linearly Transformed Cosines (LTCs), can be extended to support line and disk lights. We will discuss the theory behind these approaches as well as practical implementation tips and tricks concerning numerical precision and performance.\n Physically Based Shading at DreamWorks Animation\n Feng Xie and Jon Lanz\n PDI/DreamWorks was one of the first animation studios to adopt global illumination in production rendering. Concurrently, we have also been developing and applying physically based shading principles to improve the consistency and realism of our material models, while balancing the need for intuitive artistic control required for feature animations.\n In this talk, we will start by presenting the evolution of physically based shading in our films. Then we will present some fundamental principles with respect to importance sampling and energy conservation in our BSDF framework with a pragmatic and efficient approach to transimssion fresnel modeling. Finally, we will present our new set of physically plausible production shaders for our new path tracer, which includes our new hard surface shader, our approach to material layering and some new developments in fabric and glitter shading.\n Volumetric Skin and Fabric Shading at Framestore\n Nathan Walster\n Recent advances in shading have led to the use of free-path sampling to better solve complex light transport within volumetric materials. In this talk, we describe how we have implemented these ideas and techniques within a production environment, their application on recent shows---such as Guardians of the Galaxy Vol. 
2 and Alien: Covenant---and the effect this has had on artists' workflow within our studio.\n Practical Multilayered Materials in Call of Duty: Infinite Warfare\n Michał Drobot\n This talk presents a practical approach to multilayer, physically based surface rendering, specifically optimized for Forward+ rendering pipelines. The presented pipeline allows for the creation of complex surface by decomposing them into different mediums, each represented by a simple BRDF/BSSRDF and set of simple, physical macro properties, such as thickness, scattering and absorption. The described model is explained via practical examples of common multilayer materials such as car paint, lacquered wood, ice, and semi-translucent plastics. Finally, the talk describes intrinsic implementation details for achieving a low performance budget for 60 Hz titles as well as supporting multiple rendering modes: opaque, alpha blend, and refractive blend.\n Pixar's Foundation for Materials: PxrSurface and PxrMarschnerHair\n Christophe Hery and Junyi Ling\n Pixar's Foundation Materials, PxrSurface and PxrMarschnerHair, began shipping with RenderMan 21.\n PxrSurface is the standard surface shader developed in the studio for Finding Dory and used more recently for Cars 3 and Coco. This shader contains nine lobes that cover the entire gamut of surface materials for these two films: diffuse, three specular, iridescence, fuzz, subsurface, single scatter and a glass lobe. Each of these BxDF lobes is energy conserving, but conservation is not enforced between lobes on the surface level. We use parameter layering methods to feed a PxrSurface with pre-layered material descriptions. This simultaneously allows us the flexibility of a multilayered shading pipeline together with efficient and consistent rendering behavior.\n We also implemented our individual BxDFs with the latest state-of-the-art techniques. For example, our three specular lobes can be switched between Beckmann and GGX modes. Many compound materials have multiple layers of specular; these lobes interact with each other modulated by the Fresnel effect of the clearcoat layer. We also leverage LEADR mapping to recreate sub-displacement micro features such as metal flakes and clearcoat scratches.\n Another example is that PxrSurface ships with Jensen, d'Eon and Burley diffusion profiles. Additionally, we implemented a novel subsurface model using path-traced volumetric scattering, which represents a significant advancement. It captures zero and single scattering events of subsurface scattering implicit to the path-tracing algorithm. The user can adjust the phase-function of the scattering events and change the extinction profiles, and it also comes with standardized color inversion features for intuitive albedo input. To the best of our knowledge, this is the first commercially available rendering system to model these features and the rendering cost is comparable to classic diffusion subsurface scattering models.\n PxrMarschnerHair implements Marschner's seminal hair illumination model with importance sampling. We also account for the residual energy left after the R, TT, TRT and glint lobes, through a fifth diffuse lobe. We show that this hair surface shader can reproduce dark and blonde hair effectively in a path-traced production context. 
Volumetric scattering from fiber to fiber changes the perceived hue and saturation of a groom, so we also provide a color inversion scheme to invert input albedos, such that the artistic inputs are straightforward and intuitive.\n Revisiting Physically Based Shading at Imageworks\n Christopher Kulla and Alejandro Conty\n Two years ago, the rendering and shading groups at Sony Imageworks embarked on a project to review the structure of our physically based shaders in an effort to simplify their implementation, improve quality and pave the way to take advantage of future improvements in light transport algorithms.\n We started from classic microfacet BRDF building blocks and investigated energy conservation and artist friendly parametrizations. We continued by unifying volume rendering and subsurface scattering algorithms and put in place a system for medium tracking to improve the setup of nested media. Finally, from all these building blocks, we rebuilt our artist-facing shaders with a simplified interface and a more flexible layering approach through parameter blending.\n Our talk will discuss the details of our various building blocks, what worked and what didn't, as well as some future research directions we are still interested in exploring.", "title": "" }, { "docid": "67544e71b45acb84923a3db84534a377", "text": "The precision of point-of-gaze (POG) estimation during a fixation is an important factor in determining the usability of a noncontact eye-gaze tracking system for real-time applications. The objective of this paper is to define and measure POG fixation precision, propose methods for increasing the fixation precision, and examine the improvements when the methods are applied to two POG estimation approaches. To achieve these objectives, techniques for high-speed image processing that allow POG sampling rates of over 400 Hz are presented. With these high-speed POG sampling rates, the fixation precision can be improved by filtering while maintaining an acceptable real-time latency. The high-speed sampling and digital filtering techniques developed were applied to two POG estimation techniques, i.e., the high-speed pupil-corneal reflection (HS P-CR) vector method and a 3-D model-based method allowing free head motion. Evaluation on the subjects has shown that when operating at 407 frames per second (fps) with filtering, the fixation precision for the HS P-CR POG estimation method was improved by a factor of 5.8 to 0.035 deg (1.6 screen pixels) compared to the unfiltered operation at 30 fps. For the 3-D POG estimation method, the fixation precision was improved by a factor of 11 to 0.050 deg (2.3 screen pixels) compared to the unfiltered operation at 30 fps.", "title": "" }, { "docid": "b83a0341f2ead9c72eda4217e0f31ea2", "text": "Time-series classification has attracted considerable research attention due to the various domains where time-series data are observed, ranging from medicine to econometrics. Traditionally, the focus of time-series classification has been on short time-series data composed of a few patterns exhibiting variabilities, while recently there have been attempts to focus on longer series composed of multiple local patterns repeating with an arbitrary irregularity. The primary contribution of this paper relies on presenting a method which can detect local patterns in repetitive time-series via fitting local polynomial functions of a specified degree. We capture the repetitiveness degrees of time-series datasets via a new measure.
Furthermore, our method approximates local polynomials in linear time and ensures an overall linear running time complexity. The coefficients of the polynomial functions are converted to symbolic words via equi-area discretizations of the coefficients' distributions. The symbolic polynomial words enable the detection of similar local patterns by assigning the same word to similar polynomials. Moreover, a histogram of the frequencies of the words is constructed from each time-series' bag of words. Each row of the histogram enables a new representation for the series and symbolizes the occurrence of local patterns and their frequencies. In an experimental comparison against state-of-the-art baselines on repetitive datasets, our method demonstrates significant improvements in terms of prediction accuracy.", "title": "" }, { "docid": "89d8241f39d1f7a71f283669f4162a82", "text": "Wound field synchronous generators (WFSG) are the standard electromechanical converter for back-up and utility scale power generation. Maintenance costs may be minimized by adopting noncontact or “brushless” technologies to replace sliding slip ring connections for rotor field excitation. This paper presents a brushless excitation approach using ceramic insulated sleeve (journal) bearings with oil lubrication to form capacitively coupled slip rings, in contrast to more traditional inductive brushless exciters and rotary transformers. This capacitive power transfer (CPT) approach exhibits advantages including low weight, low volume, and has a relatively simple construction using off-the-shelf components. Analysis, design, and prototype construction of the CPT system are presented. Experimental results demonstrate that 1.7 nF of capacitive coupling transfers 340 W to the rotor field winding of a 10 kW 208 V WFSG. Voltage regulation of a WFSG is demonstrated during steady state and 1 per unit load step changes yielding a NEMA-MG1 class G2 rating.", "title": "" }, { "docid": "ddc2675904b26e1c023d6f605751251d", "text": "The impacts of high technology industries have been growing increasingly to technological innovations and global economic developments, while the concerns in sustainability are calling for facilitating green materials and cleaner production in the industrial value chains. Today’s manufacturing companies are not striving for individual capacities but for the effective working with green supply chains. However, in addition to environmental and social objectives, cost and economic feasibility has become one of the most critical success factors for improving supply chain management with green component procurement collaboration, especially for the electronics OEM (original equipment manufacturing) companies whose procurement costs often make up a very high proportion of final product prices. This paper presents a case study from the systems perspective by using System Dynamics simulation analysis and statistical validations with empirical data. Empirical data were collected from Taiwanese manufacturing chains—among the world’s largest manufacturing clusters of high technology components and products—and their global green suppliers to examine the benefits of green component procurement collaborations in terms of shared costs and improved shipping time performance. Two different supply chain collaboration models, from multi-layer ceramic capacitor (MLCC) and universal serial bus 3.0 (USB 3.0) cable procurements, were benchmarked and statistically validated. 
The results suggest that the practices of collaborative planning for procurement quantity and accurate fulfillment by suppliers are significantly related to cost effectiveness and shipping time efficiency. Although the price negotiation of upstream raw materials for the collaborative suppliers has no statistically significant benefit to the shipping time efficiency, the shared cost reduction of component procurement is significantly positive for supply chain collaboration among green manufacturers. Managerial implications toward sustainable supply chain management were also discussed.", "title": "" } ]
scidocsrr
92afe2fe13ad061cff27b3e41c983e8b
Search-and-Compute on Encrypted Data
[ { "docid": "4dd92ba65219dd86b5de47a4170d4fda", "text": "In a private database query system a client issues queries to a database server and obtains the results without learning anything else about the database and without the server learning the query. In this work we develop tools for implementing private database queries using somewhat-homomorphic encryption (SWHE), that is, using an encryption system that supports only limited computations on encrypted data. We show that a polynomial encoding of the database enables an efficient implementation of several different query types using only low-degree computations on ciphertexts. Specifically, we study two separate settings that offer different privacy/efficiency tradeoffs. In the basic client-server setting, we show that additive homomorphisms are sufficient to implement conjunction and threshold queries. We obtain further efficiency improvements using an additive system that also supports a single homomorphic multiplication on ciphertexts. This implementation hides all aspects of the client’s query from the server, and reveals nothing to the client on non-matching records. To improve performance further we turn to the “Isolated-Box” architecture of De Cristofaro et al. In that architecture the role of the database server is split between two non-colluding parties. The server encrypts and pre-processes the n-record database and also prepares an encrypted inverted index. The server sends the encrypted database and inverted index to a proxy, but keeps the decryption keys to itself. The client interacts with both server and proxy for every query and privacy holds as long as the server and proxy do not collude. We show that using a system that supports only log(n) multiplications on encrypted data it is possible to implement conjunctions and threshold queries efficiently. We implemented our protocols for the Isolated-box architecture using the somewhat homomorphic encryption system by Brakerski, and compared it to a simpler implementation that only uses Paillier’s additively homomorphic encryption system. The implementation using somewhat homomorphic encryption was able to handle a query with a few thousand matches out of a million-record database in just a few minutes, far outperforming the implementation using additively homomorphic encryption.", "title": "" } ]
[ { "docid": "56c75286b03f3a643ef0ade81edd9254", "text": "The data saturation problem in Landsat imagery is well recognized and is regarded as an important factor resulting in inaccurate forest aboveground biomass (AGB) estimation. However, no study has examined the saturation values for different vegetation types such as coniferous and broadleaf forests. The objective of this study is to estimate the saturation values in Landsat imagery for different vegetation types in a subtropical region and to explore approaches to improving forest AGB estimation. Landsat Thematic Mapper imagery, digital elevation model data, and field measurements in Zhejiang province of Eastern China were used. Correlation analysis and scatterplots were first used to examine specific spectral bands and their relationships with AGB. A spherical model was then used to quantitatively estimate the saturation value of AGB for each vegetation type. A stratification of vegetation types and/or slope aspects was used to determine the potential to improve AGB estimation performance by developing a specific AGB estimation model for each category. Stepwise regression analysis based on Landsat spectral signatures and textures using grey-level co-occurrence matrix (GLCM) was used to develop AGB estimation models for different scenarios: non-stratification, stratification based on either vegetation types, slope aspects, or the combination of vegetation types and slope aspects. The results indicate that pine forest and mixed forest have the highest AGB saturation values (159 and 152 Mg/ha, respectively), Chinese fir and broadleaf forest have lower saturation values (143 and 123 Mg/ha, respectively), and bamboo forest and shrub have the lowest saturation values (75 and 55 Mg/ha, respectively). The stratification based on either vegetation types or slope aspects provided smaller root mean squared errors (RMSEs) than non-stratification. The AGB estimation models based on stratification of both vegetation types and slope aspects provided the most accurate estimation with the smallest RMSE of 24.5 Mg/ha. Relatively low AGB (e.g., less than 40 Mg/ha) sites resulted in overestimation and higher AGB (e.g., greater than 140 Mg/ha) sites resulted in underestimation. The smallest RMSE was obtained when AGB was 80–120 Mg/ha. This research indicates the importance of stratification in mitigating the data saturation problem, thus improving AGB estimation.", "title": "" }, { "docid": "3de4922096e2d9bf04ba1ea89b3b3ff1", "text": "Events of various sorts make up an important subset of the entities relevant not only in knowledge representation but also in natural language processing and numerous other fields and tasks. How to represent these in a homogeneous yet expressive, extensive, and extensible way remains a challenge. In this paper, we propose an approach based on FrameBase, a broad RDFS-based schema consisting of frames and roles. The concept of a frame, which is a very general one, can be considered as subsuming existing definitions of events. This ensures a broad coverage and a uniform representation of various kinds of events, thus bearing the potential to serve as a unified event model. We show how FrameBase can represent events from several different sources and domains. 
These include events from a specific taxonomy related to organized crime, events captured using schema.org, and events from DBpedia.", "title": "" }, { "docid": "685b1471c334c941507ac12eb6680872", "text": "Purpose – The concept of ‘‘knowledge’’ is presented in diverse and sometimes even controversial ways in the knowledge management (KM) literature. The aim of this paper is to identify the emerging views of knowledge and to develop a framework to illustrate the interrelationships of the different knowledge types. Design/methodology/approach – This paper is a literature review to explore how ‘‘knowledge’’ as a central concept is presented and understood in a selected range of KM publications (1990-2004). Findings – The exploration of the knowledge landscape showed that ‘‘knowledge’’ is viewed in four emerging and complementary ways. The ontological, epistemological, commodity, and community views of knowledge are discussed in this paper. The findings show that KM is still a young discipline and therefore it is natural to have different, sometimes even contradicting views of ‘‘knowledge’’ side by side in the literature. Practical implications – These emerging views of knowledge could be seen as opportunities for researchers to provide new contributions. However, this diversity and complexity call for careful and specific clarification of the researchers’ standpoint, for a clear statement of their views of knowledge. Originality/value – This paper offers a framework as a compass for researchers to help their orientation in the confusing and ever changing landscape of knowledge.", "title": "" }, { "docid": "9ff9c4f3da3ad64bef92a574895ea93f", "text": "Computer simulation of a CA1 hippocampal pyramidal neuron is used to estimate the effects of synaptic and spatio-temporal noise on such a cell's ability to accurately calculate the weighted sum of its inputs, presented in the form of transient patterns of activity. Comparison is made between the pattern recognition capability of the cell in the presence of this noise and that of a noise-free computing unit in an artificial neural network model of a heteroassociative memory. Spatio-temporal noise due to the spatial distribution of synaptic input and quantal variance at each synapse degrade the accuracy of signal integration and consequently reduce pattern recognition performance in the cell. It is shown here that a certain degree of asynchrony in action potential arrival at different synapses, however, can improve signal integration. Signal amplification by voltage-dependent conductances in the dendrites, provided by synaptic NMDA receptors, and sodium and calcium ion channels, also improves integration and pattern recognition. While the biological sources of noise are significant when few patterns are stored in the associative memory of which the cell is a part, when large numbers of patterns are stored the noise from the other stored patterns comes to dominate the pattern recognition process. In this situation, the pattern recognition performance of the pyramidal cell is within a factor of two of that of the computing unit in the artificial neural network model.", "title": "" }, { "docid": "0347347608738b966ca4a62dfb37fdd7", "text": "Much of the work done in the field of tangible interaction has focused on creating tools for learning; however, in many cases, little evidence has been provided that tangible interfaces offer educational benefits compared to more conventional interaction techniques. 
In this paper, we present a study comparing the use of a tangible and a graphical interface as part of an interactive computer programming and robotics exhibit that we designed for the Boston Museum of Science. In this study, we have collected observations of 260 museum visitors and conducted interviews with 13 family groups. Our results show that visitors found the tangible and the graphical systems equally easy to understand. However, with the tangible interface, visitors were significantly more likely to try the exhibit and significantly more likely to actively participate in groups. In turn, we show that regardless of the condition, involving multiple active participants leads to significantly longer interaction times. Finally, we examine the role of children and adults in each condition and present evidence that children are more actively involved in the tangible condition, an effect that seems to be especially strong for girls.", "title": "" }, { "docid": "4fa1b8c7396e636216d0c1af0d1adf15", "text": "Modern smartphone platforms have millions of apps, many of which request permissions to access private data and resources, like user accounts or location. While these smartphone platforms provide varying degrees of control over these permissions, the sheer number of decisions that users are expected to manage has been shown to be unrealistically high. Prior research has shown that users are often unaware of, if not uncomfortable with, many of their permission settings. Prior work also suggests that it is theoretically possible to predict many of the privacy settings a user would want by asking the user a small number of questions. However, this approach has neither been operationalized nor evaluated with actual users before. We report on a field study (n=72) in which we implemented and evaluated a Personalized Privacy Assistant (PPA) with participants using their own Android devices. The results of our study are encouraging. We find that 78.7% of the recommendations made by the PPA were adopted by users. Following initial recommendations on permission settings, participants were motivated to further review and modify their settings with daily “privacy nudges.” Despite showing substantial engagement with these nudges, participants only changed 5.1% of the settings previously adopted based on the PPA’s recommendations. The PPA and its recommendations were perceived as useful and usable. We discuss the implications of our results for mobile permission management and the design of personalized privacy assistant solutions.", "title": "" }, { "docid": "3531f08daf40f88915eadba307252c6f", "text": "Although some crowdsourcing aggregation models have been introduced to aggregate noisy crowd labels, these models mostly consider single-option (i.e. discrete) crowd labels as the input variables, and are not compatible with multi-option (i.e. non-deterministic) crowd data. In this paper, we propose a novel joint generative-discriminative aggregation model, which is able to efficiently deal with both single-option and multi-option crowd labels. Considering the confidence of workers for each option as the input data, we first introduce a new discriminative aggregation model, called Constrained Weighted Majority Voting (CWMVL1), which improves the performance of majority voting method. CWMVL1 considers flexible reliability parameters for crowd workers, employs L1-norm loss function to deal with noisy crowd data, and includes optimization constraints to have probabilistic outputs. 
We prove that our objective is convex, and derive an efficient optimization algorithm. Moreover, we integrate the discriminative CWMVL1 model with a generative model, resulting in a powerful joint aggregation model. Combination of these sub-models is obtained in a probabilistic framework rather than a heuristic way. For our joint model, we derive an efficient optimization algorithm, which alternates between updating the parameters and estimating the potential true labels. Experimental results indicate that the proposed aggregation models achieve superior or competitive results in comparison with the state-of-the-art models on single-option and multi-option crowd datasets, while having faster convergence rates and more reliable predictions.", "title": "" }, { "docid": "3aaa2d625cddd46f1a7daddbb3e2b23d", "text": "Text summarization is the task of shortening text documents but retaining their overall meaning and information. A good summary should highlight the main concepts of any text document. Many statistical-based, location-based and linguistic-based techniques are available for text summarization. This paper has described a novel hybrid technique for automatic summarization of Punjabi text. Punjabi is an official language of Punjab State in India. There are very few linguistic resources available for Punjabi. The proposed summarization system is hybrid of conceptual-, statistical-, location- and linguistic-based features for Punjabi text. In this system, four new location-based features and two new statistical features (entropy measure and Z score) are used and results are very much encouraging. Support vector machine-based classifier is also used to classify Punjabi sentences into summary and non-summary sentences and to handle imbalanced data. Synthetic minority over-sampling technique is applied for over-sampling minority class data. Results of proposed system are compared with different baseline systems, and it is found that F score, Precision, Recall and ROUGE-2 score of our system are reasonably well as compared to other baseline systems. Moreover, summary quality of proposed system is comparable to the gold summary.", "title": "" }, { "docid": "28cb5dee0fc91bd9c99ede29c6df0f9b", "text": "A crowdsourcing system, such as the Amazon Mechanical Turk (AMT), provides a platform for a large number of questions to be answered by Internet workers. Such systems have been shown to be useful to solve problems that are difficult for computers, including entity resolution, sentiment analysis, and image recognition. In this paper, we investigate the online task assignment problem: Given a pool of n questions, which of the k questions should be assigned to a worker? A poor assignment may not only waste time and money, but may also hurt the quality of a crowdsourcing application that depends on the workers' answers. We propose to consider quality measures (also known as evaluation metrics) that are relevant to an application during the task assignment process. Particularly, we explore how Accuracy and F-score, two widely-used evaluation metrics for crowdsourcing applications, can facilitate task assignment. Since these two metrics assume that the ground truth of a question is known, we study their variants that make use of the probability distributions derived from workers' answers. We further investigate online assignment strategies, which enables optimal task assignments. Since these algorithms are expensive, we propose solutions that attain high quality in linear time.
We develop a system called the Quality-Aware Task Assignment System for Crowdsourcing Applications (QASCA) on top of AMT. We evaluate our approaches on five real crowdsourcing applications. We find that QASCA is efficient, and attains better result quality (of more than 8% improvement) compared with existing methods.", "title": "" }, { "docid": "5fa1f3a35f4051293fea54c169471335", "text": "We describe an algorithm for navigation state estimation during planetary descent to enable precision landing. The algorithm automatically produces 2D-to-3D correspondences between descent images and a surface map and 2D-to-2D correspondences through a sequence of descent images. These correspondences are combined with inertial measurements in an extended Kalman filter that estimates lander position, velocity and attitude as well as the time varying biases of the inertial measurements. The filter tightly couples inertial and camera measurements in a resource-adaptive and hence real-time capable fashion. Results from a sounding rocket test, covering the dynamic profile of typical planetary landing scenarios, show estimation errors of magnitude 0.16 m/s in velocity and 6.4 m in position at touchdown. These results vastly improve current state of the art and meet the requirements of future planetary exploration missions.", "title": "" }, { "docid": "e64c8560d798b891f9addde71e473ff8", "text": "The use of phosphate solubilizing bacteria as inoculants simultaneously increases P uptake by the plant and crop yield. Strains from the genera Pseudomonas, Bacillus and Rhizobium are among the most powerful phosphate solubilizers. The principal mechanism for mineral phosphate solubilization is the production of organic acids, and acid phosphatases play a major role in the mineralization of organic phosphorous in soil. Several phosphatase-encoding genes have been cloned and characterized and a few genes involved in mineral phosphate solubilization have been isolated. Therefore, genetic manipulation of phosphate-solubilizing bacteria to improve their ability to improve plant growth may include cloning genes involved in both mineral and organic phosphate solubilization, followed by their expression in selected rhizobacterial strains. Chromosomal insertion of these genes under appropriate promoters is an interesting approach.", "title": "" }, { "docid": "e61095bf820e170c8c8d6f2212142962", "text": "Today, even low-cost FPGAs provide far more computing power than DSPs. Current FPGAs have dedicated multipliers and even DSP multiply/accumulate (MAC) blocks that enable signals to be processed with clock speeds in excess of 550 MHz. Until now, however, these capabilities were rarely needed in audio signal processing. A serial implementation of an audio algorithm working in the kilohertz range uses exactly the same resources required for processing signals in the three-digit megahertz range. Consequently, programmable logic components such as PLDs or FPGAs are rarely used for processing low-frequency signals. After all, the parallel processing of mathematical operations in hardware is of no benefit when compared to an implementation based on classical DSPs; the sampling rates are so low that most serial DSP implementations are more than adequate. In fact, audio applications are characterized by such a high number of multiplications that they previously could", "title": "" }, { "docid": "7543a1640df29e6f4adef7c49a54fe2a", "text": "In this work, we investigate a new ranking method for principal component analysis (PCA). 
Instead of sorting the principal components in decreasing order of the corresponding eigenvalues, we propose the idea of using the discriminant weights given by separating hyperplanes to select among the principal components the most discriminant ones. The method is not restricted to any particular probability density function of the sample groups because it can be based on either a parametric or non-parametric separating hyperplane approach. In addition, the number of meaningful discriminant directions is not limited to the number of groups, providing additional information to understand group differences extracted from high-dimensional problems. To evaluate the discriminant principal components, separation tasks have been performed using face images and three different databases. Our experimental results have shown that the principal components selected by the separating hyperplanes allow robust reconstruction and interpretation of the data, as well as higher recognition rates using less linear features in situations where the differences between the sample groups are subtle and consequently most difficult for the standard and state-of-the-art PCA selection methods. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "6259b792713367345374d437f37abdb0", "text": "SWOT analysis (Strength, Weakness, Opportunity, and Threat) has been in use since the 1960s as a tool to assist strategic planning in various types of enterprises including those in the construction industry. Whilst still widely used, the approach has called for improvements to make it more helpful in strategic management. The project described in this paper aimed to study whether the process to convert a SWOT analysis into a strategic plan could be assisted with some simple rationally quantitative model, as an augmented SWOT analysis. By utilizing the mathematical approaches including the quantifying techniques, the “Maximum Subarray” method, and fuzzy mathematics, one or more Heuristic Rules are derived from a SWOT analysis. These Heuristic Rules bring into focus the most influential factors concerning a strategic planning situation, and thus inform strategic analysts where particular consideration should be given. A case study conducted in collaboration with a Chinese international construction company showed that the new SWOT approach is more helpful to strategic planners. The paper provides an augmented SWOT analysis approach for strategists to conduct strategic planning in the construction industry. It also contributes fresh insights into strategic planning by introducing rationally analytic processes to improve the SWOT analysis.", "title": "" }, { "docid": "9eb395e921d7b923109db723e30d9b47", "text": "This paper presents the modeling, analysis, design and experimental validation of a robust sensorless control method for permanent magnet synchronous motor (PMSM) based on extended Kalman filter (EKF). A real-time PMSM and its EKF models in the MATLAB/Simulink simulation environment are developed. The position/speed sensorless control scheme along with the power electronic circuitry is modeled. The performance of the proposed control is assessed and verified for different types of dynamic and static torque loads. The robustness of the sensorless method is demonstrated by starting the motor with different rotor initial positions. The proposed EKF speed/position estimation method is also proved insensitive to the PMSM parameter variations. 
Proper operations of this EKF based sensorless control method for a high-speed permanent magnet synchronous machine are verified experimentally in the research laboratory at Honeywell.", "title": "" }, { "docid": "bbb9ac7170663ce653ec9cb40db8695b", "text": "What we believe to be a novel three-dimensional (3D) phase unwrapping algorithm is proposed to unwrap 3D wrapped-phase volumes. It depends on a quality map to unwrap the most reliable voxels first and the least reliable voxels last. The technique follows a discrete unwrapping path to perform the unwrapping process. The performance of this technique was tested on both simulated and real wrapped-phase maps. And it is found to be robust and fast compared with other 3D phase unwrapping algorithms.", "title": "" }, { "docid": "883182582b2b62694e725e323e3eb88c", "text": "With increasing use of mobile devices, photo sharing services are experiencing greater popularity. Aside from providing storage, photo sharing services enable bandwidth-efficient downloads to mobile devices by performing server-side image transformations (resizing, cropping). On the flip side, photo sharing services have raised privacy concerns such as leakage of photos to unauthorized viewers and the use of algorithmic recognition technologies by providers. To address these concerns, we propose a privacy-preserving photo encoding algorithm that extracts and encrypts a small, but significant, component of the photo, while preserving the remainder in a public, standards-compatible, part. These two components can be separately stored. This technique significantly reduces the accuracy of automated detection and recognition on the public part, while preserving the ability of the provider to perform server-side transformations to conserve download bandwidth usage. Our prototype privacy-preserving photo sharing system, P3, works with Facebook, and can be extended to other services as well. P3 requires no changes to existing services or mobile application software, and adds minimal photo storage overhead.", "title": "" }, { "docid": "56206ddb152c3a09f3e28a6ffa703cd6", "text": "This chapter introduces the operation and control of a Doubly-fed Induction Generator (DFIG) system. The DFIG is currently the system of choice for multi-MW wind turbines. The aerodynamic system must be capable of operating over a wide wind speed range in order to achieve optimum aerodynamic efficiency by tracking the optimum tip-speed ratio. Therefore, the generator’s rotor must be able to operate at a variable rotational speed. The DFIG system therefore operates in both suband super-synchronous modes with a rotor speed range around the synchronous speed. The stator circuit is directly connected to the grid while the rotor winding is connected via slip-rings to a three-phase converter. For variable-speed systems where the speed range requirements are small, for example ±30% of synchronous speed, the DFIG offers adequate performance and is sufficient for the speed range required to exploit typical wind resources. An AC-DC-AC converter is included in the induction generator rotor circuit. The power electronic converters need only be rated to handle a fraction of the total power – the rotor power – typically about 30% nominal generator power. Therefore, the losses in the power electronic converter can be reduced, compared to a system where the converter has to handle the entire power, and the system cost is lower due to the partially-rated power electronics. 
This chapter will introduce the basic features and normal operation of DFIG systems for wind power applications basing the description on the standard induction generator. Different aspects that will be described include their variable-speed feature, power converters and their associated control systems, and application issues.", "title": "" }, { "docid": "97af4f8e35a7d773bb85969dd027800b", "text": "For an intelligent transportation system (ITS), traffic incident detection is one of the most important issues, especially for urban area which is full of signaled intersections. In this paper, we propose a novel traffic incident detection method based on the image signal processing and hidden Markov model (HMM) classifier. First, a traffic surveillance system was set up at a typical intersection of china, traffic videos were recorded and image sequences were extracted for image database forming. Second, compressed features were generated through several image processing steps, image difference with FFT was used to improve the recognition rate. Finally, HMM was used for classification of traffic signal logics (East-West, West-East, South-North, North-South) and accident of crash, the total correct rate is 74% and incident recognition rate is 84%. We believe, with more types of incident adding to the database, our detection algorithm could serve well for the traffic surveillance system.", "title": "" } ]
scidocsrr
fd3dd0a7109d3fbbeabaae1653cb61df
Active Bias: Training a More Accurate Neural Network by Emphasizing High Variance Samples
[ { "docid": "35625f248c81ebb5c20151147483f3f6", "text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.", "title": "" } ]
[ { "docid": "d51d916e4529a2dc92aa2f2809270f17", "text": "In this paper, we propose to learn word embeddings based on the recent fixedsize ordinally forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence into a fixed-size representation. We use FOFE to fully encode the left and right context of each word in a corpus to construct a novel word-context matrix, which is further weighted and factorized using truncated SVD to generate low-dimension word embedding vectors. We have evaluated this alternative method in encoding word-context statistics and show the new FOFE method has a notable effect on the resulting word embeddings. Experimental results on several popular word similarity tasks have demonstrated that the proposed method outperforms many recently popular neural prediction methods as well as the conventional SVD models that use canonical count based techniques to generate word context matrices.", "title": "" }, { "docid": "42127829aebaaaa4a4ac6c7e9417feaf", "text": "The study was to compare treatment preference, efficacy, and tolerability of sildenafil citrate (sildenafil) and tadalafil for treating erectile dysfunction (ED) in Chinese men naοve to phosphodiesterase 5 (PDE5) inhibitor therapies. This multicenter, randomized, open-label, crossover study evaluated whether Chinese men with ED preferred 20-mg tadalafil or 100-mg sildenafil. After a 4 weeks baseline assessment, 383 eligible patients were randomized to sequential 20-mg tadalafil per 100-mg sildenafil or vice versa for 8 weeks respectively and then chose which treatment they preferred to take during the 8 weeks extension. Primary efficacy was measured by Question 1 of the PDE5 Inhibitor Treatment Preference Questionnaire (PITPQ). Secondary efficacy was analyzed by PITPQ Question 2, the International Index of Erectile Function (IIEF) erectile function (EF) domain, sexual encounter profile (SEP) Questions 2 and 3, and the Drug Attributes Questionnaire. Three hundred and fifty men (91%) completed the randomized treatment phase. Two hundred and forty-two per 350 (69.1%) patients preferred 20-mg tadalafil, and 108/350 (30.9%) preferred 100-mg sildenafil (P < 0.001) as their treatment in the 8 weeks extension. Ninety-two per 242 (38%) patients strongly preferred tadalafil and 37/108 (34.3%) strongly the preferred sildenafil. The SEP2 (penetration), SEP3 (successful intercourse), and IIEF-EF domain scores were improved in both tadalafil and sildenafil treatment groups. For patients who preferred tadalafil, getting an erection long after taking the medication was the most reported reason for tadalafil preference. The only treatment-emergent adverse event reported by > 2% of men was headache. After tadalafil and sildenafil treatments, more Chinese men with ED naοve to PDE5 inhibitor preferred tadalafil. Both sildenafil and tadalafil treatments were effective and safe.", "title": "" }, { "docid": "46291c5a7fafd089c7729f7bc77ae8b7", "text": "This paper proposes a new system for offline writer identification and writer verification. The proposed method uses GMM supervectors to encode the feature distribution of individual writers. Each supervector originates from an individual GMM which has been adapted from a background model via a maximum-a-posteriori step followed by mixing the new statistics with the background model. 
We show that this approach improves the TOP-1 accuracy of the current best ranked methods evaluated at the ICDAR-2013 competition dataset from 95.1% [13] to 97.1%, and from 97.9% [11] to 99.2% at the CVL dataset, respectively. Additionally, we compare the GMM supervector encoding with other encoding schemes, namely Fisher vectors and Vectors of Locally Aggregated Descriptors.", "title": "" }, { "docid": "f3438e53b03c4d70c430d0213c24e59c", "text": "The modern education system for manual therapy, including massage therapy, physiotherapy, osteopathy, and chiropractic works on the acceptance that manipulation of tissues for scars and adhesions is of therapeutic value. Manual therapy in its various forms is now being introduced in a growing number of integrated health clinics, and is accepted by many to be a valuable addition to allopathic care. Much of the manual therapy literature is innately conceptual, sometimes for lack of data, often using outdated concepts that may have been dispelled by modern science’s ability to accurately measure and observe cellular level mechanisms. Manual therapy education splits systems and practices into various camps, including central versus peripheral nervous system, visceral work, and connective tissue/fascia manipulation. There are various conjectures as to how mechanical forces affect these different systems of anatomy, despite almost no directly relevant science. The manipulation of fascia as a technique is relatively recent in manual therapy history, and has been separated out by various schools of thought. Patients seem to receive benefit from the treatments they receive. Interest in the mechanisms at the cellular level that govern wound healing and understanding the mechanisms of pain are critical to the practice and diagnostic reasoning of tissue manipulation for scars and adhesions.", "title": "" }, { "docid": "df53b371cd82f51812d16e24eb5ce40e", "text": "Implementations of map-reduce are being used to perform many operations on very large data. We examine strategies for joining several relations in the map-reduce environment. Our new approach begins by identifying the “map-key,” the set of attributes that identify the Reduce process to which a Map process must send a particular tuple. Each attribute of the map-key gets a “share,” which is the number of buckets into which its values are hashed, to form a component of the identifier of a Reduce process. Relations have their tuples replicated in limited fashion, the degree of replication depending on the shares for those map-key attributes that are missing from their schema. We study the problem of optimizing the shares, given a fixed number of Reduce processes. An algorithm for detecting and fixing problems where a variable is mistakenly included in the map-key is given. Then, we consider two important special cases: chain joins and star joins. In each case, we are able to determine the map-key and determine the shares that yield the least replication. 
While the method we propose is not always superior to the conventional way of using map-reduce to implement joins, there are some important cases involving large-scale data where our method wins, including: 1) analytic queries in which a very large fact table is joined with smaller dimension tables, and 2) queries involving paths through graphs with high out-degree, such as the Web or a social network.", "title": "" }, { "docid": "c528ea5c333c63504b1221825597a382", "text": "This paper introduces our domain independent approach to “free generation” from single RDF triples without using any domain dependent knowledge. Our approach is developed based on our argument that RDF representations carry rich linguistic information, which can be used to achieve readable domain independent generation. In order to examine to what extent our argument is realistic, we carry out an evaluation experiment, which is the first evaluation of this kind of domain independent generation in the field.", "title": "" }, { "docid": "681360f20a662f439afaaa022079f7c0", "text": "We present a multi-PC/camera system that can perform 3D reconstruction and ellipsoids fitting of moving humans in real time. The system consists of five cameras. Each camera is connected to a PC which locally extracts the silhouettes of the moving person in the image captured by the camera. The five silhouette images are then sent, via local network, to a host computer to perform 3D voxel-based reconstruction by an algorithm called SPOT. Ellipsoids are then used to fit the reconstructed data. By using a simple and user-friendly interface, the user can display and observe, in real time and from any view-point, the 3D models of the moving human body. With a rate of higher than 15 frames per second, the system is able to capture nonintrusively sequence of human motions.", "title": "" }, { "docid": "e4ebb6d41393f0bd672f1f5985af98b4", "text": "We propose a new framework to rank image attractiveness using a novel pairwise deep network trained with a large set of side-by-side multi-labeled image pairs from a web image index. The judges only provide relative ranking between two images without the need to directly assign an absolute score, or rate any predefined image attribute, thus making the rating more intuitive and accurate. We investigate a deep attractiveness rank net (DARN), a combination of deep convolutional neural network and rank net, to directly learn an attractiveness score mean and variance for each image and the underlying criteria the judges use to label each pair. The extension of this model (DARN-V2) is able to adapt to individual judge's personal preference. We also show the attractiveness of search results are significantly improved by using this attractiveness information in a real commercial search engine. We evaluate our model against other state-of-the-art models on our side-by-side web test data and another public aesthetic data set. With much less judgments (1M vs 50M), our model outperforms on side-by-side labeled data, and is comparable on data labeled by absolute score.", "title": "" }, { "docid": "a0e243a0edd585303a84fda47b1ae1e1", "text": "Generative Adversarial Networks (GANs) have shown great promise recently in image generation. Training GANs for language generation has proven to be more difficult, because of the non-differentiable nature of generating text with recurrent neural networks. Consequently, past work has either resorted to pre-training with maximum-likelihood or used convolutional networks for generation. 
In this work, we show that recurrent neural networks can be trained to generate text with GANs from scratch using curriculum learning, by slowly teaching the model to generate sequences of increasing and variable length. We empirically show that our approach vastly improves the quality of generated sequences compared to a convolutional baseline. 1", "title": "" }, { "docid": "45477e67e1ddc589fde6d989254e4c32", "text": "Existing process mining approaches are able to tolerate a certain degree of noise in process log. However, processes that contain infrequent paths, multiple (nested) parallel branches, or have been changed in an ad-hoc manner, still pose challenges. For such cases, process mining typically returns “spaghetti-models”, that are hardly usable even as a starting point for process (re-)design. In this paper, we address these challenges by introducing data transformation and pre-processing steps that improve and ensure the quality of mined models for existing process mining approaches. We propose the concept of semantic log purging, i.e., the cleaning of logs based on domain specific constraints utilizing knowledge that typically complements processes. Furthermore we demonstrate the feasibility and effectiveness of the approach based on a case study in the higher education domain. We think that semantic log purging will enable process mining to yield better results, thus giving process (re-)designers a valuable tool.", "title": "" }, { "docid": "32a964bd36770b8c50a0e74289f4503b", "text": "Several competing human behavior models have been proposed to model and protect against boundedly rational adversaries in repeated Stackelberg security games (SSGs). However, these existing models fail to address three main issues which are extremely detrimental to defender performance. First, while they attempt to learn adversary behavior models from adversaries’ past actions (“attacks on targets”), they fail to take into account adversaries’ future adaptation based on successes or failures of these past actions. Second, they assume that sufficient data in the initial rounds will lead to a reliable model of the adversary. However, our analysis reveals that the issue is not the amount of data, but that there just is not enough of the attack surface exposed to the adversary to learn a reliable model. Third, current leading approaches have failed to include probability weighting functions, even though it is well known that human beings’ weighting of probability is typically nonlinear. The first contribution of this paper is a new human behavior model, SHARP, which mitigates these three limitations as follows: (i) SHARP reasons based on success or failure of the adversary’s past actions on exposed portions of the attack surface to model adversary adaptiveness; (ii) SHARP reasons about similarity between exposed and unexposed areas of the attack surface, and also incorporates a discounting parameter to mitigate adversary’s lack of exposure to enough of the attack surface; and (iii) SHARP integrates a non-linear probability weighting function to capture the adversary’s true weighting of probability. Our second contribution is a first “longitudinal study” – at least in the context of SSGs – of competing models in settings involving repeated interaction between the attacker and the defender. 
This study, where each experiment lasted a period of multiple weeks with individual sets of human subjects, illustrates the strengths and weaknesses of different models and shows the advantages of SHARP.", "title": "" }, { "docid": "72b3fbd8c7f03a4ad1e36ceb5418cba6", "text": "The risk for multifactorial diseases is determined by risk factors that frequently apply across disorders (universal risk factors). To investigate unresolved issues on etiology of and individual’s susceptibility to multifactorial diseases, research focus should shift from single determinant-outcome relations to effect modification of universal risk factors. We present a model to investigate universal risk factors of multifactorial diseases, based on a single risk factor, a single outcome measure, and several effect modifiers. Outcome measures can be disease overriding, such as clustering of disease, frailty and quality of life. “Life course epidemiology” can be considered as a specific application of the proposed model, since risk factors and effect modifiers of multifactorial diseases typically have a chronic aspect. Risk factors are categorized into genetic, environmental, or complex factors, the latter resulting from interactions between (multiple) genetic and environmental factors (an example of a complex factor is overweight). The proposed research model of multifactorial diseases assumes that determinant-outcome relations differ between individuals because of modifiers, which can be divided into three categories. First, risk-factor modifiers that determine the effect of the determinant (such as factors that modify gene-expression in case of a genetic determinant). Second, outcome modifiers that determine the expression of the studied outcome (such as medication use). Third, generic modifiers that determine the susceptibility for multifactorial diseases (such as age). A study to assess disease risk during life requires phenotype and outcome measurements in multiple generations with a long-term follow up. Multiple generations will also enable to separate genetic and environmental factors. Traditionally, representative individuals (probands) and their first-degree relatives have been included in this type of research. We put forward that a three-generation design is the optimal approach to investigate multifactorial diseases. This design has statistical advantages (precision, multiple-informants, separation of non-genetic and genetic familial transmission, direct haplotype assessment, quantify genetic effects), enables unique possibilities to study social characteristics (socioeconomic mobility, partner preferences, between-generation similarities), and offers practical benefits (efficiency, lower non-response). LifeLines is a study based on these concepts. It will be carried out in a representative sample of 165,000 participants from the northern provinces of the Netherlands. LifeLines will contribute to the understanding of how universal risk factors are modified to influence the individual susceptibility to multifactorial diseases, not only at one stage of life but cumulatively over time: the lifeline.", "title": "" }, { "docid": "7ef2f4a771aa0d1724127c97aa21e1ea", "text": "This paper demonstrates the efficient use of Internet of Things for the traditional agriculture. It shows the use of Arduino and ESP8266 based monitored and controlled smart irrigation systems, which is also cost-effective and simple. 
It is beneficial for farmers to irrigate their land conveniently through the application of an automatic irrigation system. This smart irrigation system has a pH sensor, water flow sensor, temperature sensor, and soil moisture sensor; based on these sensor readings, the Arduino microcontroller drives the servo motor and pump. The Arduino receives the sensor information and transmits it wirelessly over the internet to the website through the ESP8266 Wi-Fi module. The transmitted information is monitored and controlled using IoT. This enables a remote control mechanism through a secure internet web connection for the user. A website has been prepared which presents the real-time values and reference values of the various factors needed by crops. Users can control water pumps and sprinklers through the website and keep an eye on the reference values, which will help the farmer increase production with quality crops.", "title": "" }, { "docid": "5016ab74ebd9c1359e8dec80ee220bcf", "text": "The possibility of communication between plants was proposed nearly 20 years ago, although previous demonstrations have suffered from methodological problems and have not been widely accepted. Here we report the first rigorous, experimental evidence demonstrating that undamaged plants respond to cues released by neighbors to induce higher levels of resistance against herbivores in nature. Sagebrush plants that were clipped in the field released a pulse of an epimer of methyl jasmonate that has been shown to be a volatile signal capable of inducing resistance in wild tobacco. Wild tobacco plants with clipped sagebrush neighbors had increased levels of the putative defensive oxidative enzyme, polyphenol oxidase, relative to control tobacco plants with unclipped sagebrush neighbors. Tobacco plants near clipped sagebrush experienced greatly reduced levels of leaf damage by grasshoppers and cutworms during three field seasons compared to unclipped controls. This result was not caused by an altered light regime experienced by tobacco near clipped neighbors. Barriers to soil contact between tobacco and sagebrush did not reduce the difference in leaf damage although barriers that blocked air contact negated the effect.", "title": "" }, { "docid": "c83fe84cacf01b155705a10dd5885743", "text": "For decades, humans have dreamed of making cars that could drive themselves, so that travel would be less taxing, and the roads safer for everyone. Toward this goal, we have made strides in motion planning algorithms for autonomous cars, using a powerful new computing tool, the parallel graphics processing unit (GPU). We propose a novel five-dimensional search space formulation that includes both spatial and temporal dimensions, and respects the kinematic and dynamic constraints on a typical automobile. With this formulation, the search space grows linearly with the length of the path, compared to the exponential growth of other methods. We also propose a parallel search algorithm, using the GPU to tackle the curse of dimensionality directly and increase the number of plans that can be evaluated by an order of magnitude compared to a CPU implementation. With this larger capacity, we can evaluate a dense sampling of plans combining lateral swerves and accelerations that represent a range of effective responses to more on-road driving scenarios than have previously been addressed in the literature. 
We contribute a cost function that evaluates many aspects of each candidate plan, ranking them all, and allowing the behavior of the vehicle to be fine-tuned by changing the ranking. We show that the cost function can be changed on-line by a behavioral planning layer to express preferred vehicle behavior without the brittleness induced by top-down planning architectures. Our method is particularly effective at generating robust merging behaviors, which have traditionally required a delicate and failure-prone coordination between multiple planning layers. Finally, we demonstrate our proposed planner in a variety of on-road driving scenarios in both simulation and on an autonomous SUV, and make a detailed comparison with prior work.", "title": "" }, { "docid": "a2de7f207b83cb04e5c9b1b3187c730e", "text": "Motivated by recent trends in online advertising and advancements made by online publishers, we consider a new form of contract which allows advertisers to specify the number of unique individuals that should see their ad (reach), and the minimum number of times each individual should be exposed (frequency). We develop an optimization framework that aims for minimal under-delivery and proper spread of each campaign over its targeted demographics. As well, we introduce a pattern-based delivery mechanism which allows us to integrate a variety of interesting features into a website’s ad allocation optimization problem which have not been possible before. For example, our approach allows publishers to implement any desired pacing of ads over time at the user level or control the number of competing brands seen by each individual. We develop a two-phase algorithm that employs column generation in a hierarchical scheme with three parallelizable components. Numerical tests with real industry data show that our algorithm produces high-quality solutions and has promising run-time and scalability. Several extensions of the model are presented, e.g., to account for multiple ad positions on the webpage, or randomness in the website visitors’ arrival process.", "title": "" }, { "docid": "ac529a455bcefa58abafa6c679bec2b4", "text": "This article presents near-optimal guarantees for stable and robust image recovery from undersampled noisy measurements using total variation minimization. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of suitably incoherent matrices.", "title": "" }, { "docid": "da44395c7a0949f57e9c04169dc99581", "text": "“Which test provides the better measurement of intelligence, the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) or the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV)?” is an important question to professional psychologists; however, it has become a critical issue in Atkins cases wherein courts are often presented with divergent Full-Scale IQ (FSIQ) scores on the WAIS-III and WAIS-IV. In these instances, courts are required to render a decision stating which test provided the better measure of an inmate’s intellectual functioning. This study employed structural equation modeling to empirically determine which instrument; the WAIS-III or the WAIS-IV, provides the better measure of intelligence via the FSIQ score. 
Consistent with the publisher’s representation of intellectual functioning, the results from this study indicate the WAIS-IV provides superior measurement, scoring, and structural models to measure FSIQ when compared to the WAIS-III.", "title": "" }, { "docid": "7197dbee035c62044a93d4e60762e3ea", "text": "The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. Using these elements, we build a tunable and highly accurate recognition system that beats state-of-theart results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.1", "title": "" }, { "docid": "5a589c7beb17374e17c766634d822a80", "text": "Images now come in different forms – color, near-infrared, depth, etc. – due to the development of special and powerful cameras in computer vision and computational photography. Their cross-modal correspondence establishment is however left behind. We address this challenging dense matching problem considering structure variation possibly existing in these image sets and introduce new model and solution. Our main contribution includes designing the descriptor named robust selective normalized cross correlation (RSNCC) to establish dense pixel correspondence in input images and proposing its mathematical parameterization to make optimization tractable. A computationally robust framework including global and local matching phases is also established. We build a multi-modal dataset including natural images with labeled sparse correspondence. Our method will benefit image and vision applications that require accurate image alignment.", "title": "" } ]
scidocsrr
e3c788b135dd72371bf618d1c1c9db6f
A Prestressed Soft Gripper: Design, Modeling, Fabrication, and Tests for Food Handling
[ { "docid": "03a8635fcb64117d5a2a6f890c2b03b5", "text": "This work provides approaches to designing and fabricating soft fluidic elastomer robots. That is, three viable actuator morphologies composed entirely from soft silicone rubber are explored, and these morphologies are differentiated by their internal channel structure, namely, ribbed, cylindrical, and pleated. Additionally, three distinct casting-based fabrication processes are explored: lamination-based casting, retractable-pin-based casting, and lost-wax-based casting. Furthermore, two ways of fabricating a multiple DOF robot are explored: casting the complete robot as a whole and casting single degree of freedom (DOF) segments with subsequent concatenation. We experimentally validate each soft actuator morphology and fabrication process by creating multiple physical soft robot prototypes.", "title": "" } ]
[ { "docid": "260c12152d9bd38bd0fde005e0394e17", "text": "On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.", "title": "" }, { "docid": "211484ec722f4df6220a86580d7ecba8", "text": "The widespread use of vision-based surveillance systems has inspired many research efforts on people localization. In this paper, a series of novel image transforms based on the vanishing point of vertical lines is proposed for enhancement of the probabilistic occupancy map (POM)-based people localization scheme. Utilizing the characteristic that the extensions of vertical lines intersect at a vanishing point, the proposed transforms, based on image or ground plane coordinate system, aims at producing transformed images wherein each standing/walking person will have an upright appearance. Thus, the degradation in localization accuracy due to the deviation of camera configuration constraint specified can be alleviated, while the computation efficiency resulted from the applicability of integral image can be retained. Experimental results show that significant improvement in POM-based people localization for more general camera configurations can indeed be achieved with the proposed image transforms.", "title": "" }, { "docid": "b8ca0badcbd28507655245bae05638a1", "text": "In this work we investigate building indoor location based applications for a mobile augmented reality system. We believe that augmented reality is a natural interface to visualize spacial information such as position or direction of locations and objects for location based applications that process and present information based on the user’s position in the real world. To enable such applications we construct an indoor tracking system that covers a substantial part of a building. It is based on visual tracking of fiducial markers enhanced with an inertial sensor for fast rotational updates. To scale such a system to a whole building we introduce a space partitioning scheme to reuse fiducial markers throughout the environment. Finally we demonstrate two location based applications built upon this facility, an indoor navigation aid and a library search applica-", "title": "" }, { "docid": "278ec426c504828f1f13e1cf1ce50e39", "text": "Information retrieval, IR, is the science of extracting information from documents. It can be viewed in a number of ways: logical, probabilistic and vector space models are some of the most important. In this book, the author, one of the leading researchers in the area, shows how these three views can be combined in one mathematical framework, the very one used to formulate the general principles of quantum mechanics. Using this framework, van Rijsbergen presents a new theory for the foundations of IR, in particular a new theory of measurement. 
He shows how a document can be represented as a vector in Hilbert space, and the document’s relevance by an Hermitian operator. All the usual quantum-mechanical notions, such as uncertainty, superposition and observable, have their IR-theoretic analogues. But the approach is more than just analogy: the standard theorems can be applied to address problems in IR, such as pseudo-relevance feedback, relevance feedback and ostensive retrieval. The relation with quantum computing is also examined. To help keep the book self-contained, appendices with background material on physics and mathematics are included, and each chapter ends with some suggestions for further reading. This is an important book for all those working in IR, AI and natural language processing.", "title": "" }, { "docid": "076bd454466c5e7c08253e2e951d40c3", "text": "In recent years, the advancement in modern technologies has not only resulted in an explosion of huge data sets being captured and recorded in different fields, but also given rise to concerns in the security and protection of data during storage, transmission, processing, and access. The blockchain is a distributed ledger that records transactions in a secure, flexible, verifiable and permanent way. Transactions in a blockchain can be an exchange of an asset, the execution of the terms of a smart contract, or an update to a record. In this paper, we have developed a blockchain access control ecosystem that gives asset owners the sovereign right to effectively manage access control of large data sets and protect against data breaches. The Linux Foundation's Hyperledger Fabric blockchain is used to run the business network while the Hyperledger composer modeling tool is used to implement the smart contracts or transaction processing functions that run on the blockchain network. Keywords—Blockchain, access control, data sharing, privacy, data protection, hyperledger, distributed ledger technology, smart contract", "title": "" }, { "docid": "beff5f56387f416f4bd55fde61203200", "text": "Nutrition assessment is an essential component of the Nutrition Care Process and Model (NCPM), as it is the initial step in developing a comprehensive evaluation of the client’s nutrition history. A comprehensive nutrition assessment requires the ability to observe, interpret, analyze, and infer data to diagnose nutrition problems. This practice paper provides insight into the process by which critical thinking skills are utilized by both registered dietitian nutritionists (RDNs) and dietetic technicians, registered (DTRs).", "title": "" }, { "docid": "b92598714526b62738696de4fe6dbb9d", "text": "Despite the widespread interest in the topic of organizational citizenship behaviors (OCBs), little empirical research has tested the fundamental assumption that these forms of behavior improve the effectiveness of work groups or organizations in which they are exhibited. In the present study, the effects of OCBs on the quantity and quality of the performance of 218 people working in 40 machine crews in a paper mill located in the Northeastern United States were examined. The results indicate that helping behavior and sportsmanship had significant effects on performance quantity and that helping behavior had a significant impact on performance quality. 
However, civic virtue had no effect on either performance measure.", "title": "" }, { "docid": "f0da127d64aa6e9c87d4af704f049d07", "text": "The introduction of the blue-noise spectra-high-frequency white noise with minimal energy at low frequencies-has had a profound impact on digital halftoning for binary display devices, such as inkjet printers, because it represents an optimal distribution of black and white pixels producing the illusion of a given shade of gray. The blue-noise model, however, does not directly translate to printing with multiple ink intensities. New multilevel printing and display technologies require the development of corresponding quantization algorithms for continuous tone images, namely multitoning. In order to define an optimal distribution of multitone pixels, this paper develops the theory and design of multitone, blue-noise dithering. Here, arbitrary multitone dot patterns are modeled as a layered superposition of stack-constrained binary patterns. Multitone blue-noise exhibits minimum energy at low frequencies and a staircase-like, ascending, spectral pattern at higher frequencies. The optimum spectral profile is described by a set of principal frequencies and amplitudes whose calculation requires the definition of a spectral coherence structure governing the interaction between patterns of dots of different intensities. Efficient algorithms for the generation of multitone, blue-noise dither patterns are also introduced.", "title": "" }, { "docid": "1d4c583da38709054140152fe328294c", "text": "This paper analyzes the assumptions of the decision making models in the context of artificial general intelligence (AGI). It is argued that the traditional approaches, exemplified by decision theory and reinforcement learning, are inappropriate for AGI, because their fundamental assumptions on available knowledge and resource cannot be satisfied here. The decision making process in the AGI system NARS is introduced and compared with the traditional approaches. It is concluded that realistic decision-making models must acknowledge the insufficiency of knowledge and resources, and make assumptions accordingly. 1 Formalizing decision-making An AGI system needs to make decisions from time to time. To achieve its goals, the system must execute certain operations, which are chosen from all possible operations, according to the system’s beliefs on the relations between the operations and the goals, as well as their applicability to the current situation. On this topic, the dominating normative model is decision theory [12, 3]. According to this model, “decision making” means to choose one action from a finite set of actions that is applicable at the current state. Each action leads to some consequent states according to a probability distribution, and each consequent state is associated with a utility value. The rational choice is the action that has the maximum expected utility (MEU). When the decision extends from single actions to action sequences, it is often formalized as a Markov decision process (MDP), where the utility function is replaced by a reward value at each state, and the optimal policy, as a collection of decisions, is the one that achieves the maximum expected total reward (usually with a discount for future rewards) in the process. In AI, the best-known approach toward solving this problem is reinforcement learning [4, 16], which uses various algorithms to approach the optimal policy. 
Decision theory and reinforcement learning have been widely considered as setting the theoretical foundation of AI research [11], and the recent progress in deep learning [9] is increasing the popularity of these models. In the current AGI research, an influential model in this tradition is AIXI [2], in which reinforcement learning is combined with Solomonoff induction [15] to provide the probability values according to algorithmic complexity of the hypotheses used in prediction. Every formal model is based on some fundamental assumptions to encapsulate certain beliefs about the process to be modeled, so as to provide a coherent foundation for the conclusions derived in the model, and also to set restrictions on the situations where the model can be legally applied. In the following, four major assumptions of the above models are summarized. The assumption on task: The task of “decision making” is to select the best action from all applicable actions at each state of the process. The assumption on belief: The selection is based on the system’s beliefs about the actions, represented as probability distributions among their consequent states. The assumption on desire: The selection is guided by the system’s desires measured by a (utility or reward) value function defined on states, and the best action is the one that with the maximum expectation. The assumption on budget: The system can afford the computational resources demanded by the selection algorithm. There are many situations where the above assumptions can be reasonably accepted, and the corresponding models have been successfully applied [11, 9]. However, there are reasons to argue that artificial general intelligence (AGI) is not such a field, and there are non-trivial issues on each of the four assumptions. Issues on task: For a general-purpose system, it is unrealistic to assume that at any state all the applicable actions are explicitly listed. Actually, in human decision making the evaluation-choice step is often far less significant than diagnosis or design [8]. Though in principle it is reasonable to assume the system’s actions are recursively composed of a set of basic operations, decision makings often do not happen at the level of basic operations, but at the level of composed actions, where there are usually infinite possibilities. So decision making is often not about selection, but selective composition. Issues on belief: For a given action, the system’s beliefs about its possible consequences are not necessarily specified as a probability distribution among following states. Actions often have unanticipated consequences, and even the beliefs about the anticipated consequences usually do not fully specify a “state” of the environment or the system itself. Furthermore, the system’s beliefs about the consequences may be implicitly inconsistent, so does not correspond to a probability distribution. Issues on desire: Since an AGI system typically has multiple goals with conflicting demands, usually no uniform value function can evaluate all actions with respect to all goals within limited time. Furthermore, the goals in an AGI system change over time, and it is unrealistic to expect such a function to be defined on all future states. How desirable a situation is should be taken as part of the problem to be solved, rather than as a given. Issues on budget: An AGI is often expected to handle unanticipated problems in real time with various time requirements. 
In such a situation, even if the decision-making algorithms are considered as of “tractable” computational complexity, they may still fail to satisfy the requirement on response time in the given situation. None of the above issues is completely unknown, and various attempts have been proposed to extend the traditional models [13, 22, 1], though none of them has rejected the four assumptions altogether. Instead, a typical attitude is to take decision theory and reinforcement learning as idealized models for the actual AGI systems to approximate, as well as to be evaluated accordingly [6]. What this paper explores is the possibility of establishing normative models of decision making without accepting any of the above four assumptions. In the following, such a model is introduced, then compared with the classical models. 2 Decision making in NARS The decision-making model to be introduced comes from the NARS project [17, 18, 20]. The objective of this project is to build an AGI in the framework of a reasoning system. Decision making is an important function of the system, though it is not carried out by a separate algorithm or module, but tightly interwoven with other functions, such as reasoning and learning. Limited by the paper length, the following description only briefly covers the aspects of NARS that are directly related to the current discussion. NARS is designed according to the theory that “intelligence” is the ability for a system to be adaptive while working with insufficient knowledge and resources, that is, the system must depend on finite processing capability, make real-time responses, open to unanticipated problems and events, and learn from its experience. Under this condition, it is impossible for the truth-value of beliefs of the system to be defined either in the model-theoretic style as the extent of agreement with the state of affairs, or in the proof-theoretic style as the extent of agreement with the given axioms. Instead, it is defined as the extent of agreement with the available evidence collected from the system’s experience. Formally, for a given statement S, the amount of its positive evidence and negative evidence are defined in an idealized situation and measured by amounts w+ and w−, respectively, and the total amount of evidence is w = w+ + w−. The truth-value of S is a pair of real numbers, 〈f, c〉, where f, frequency, is w+/w so in [0, 1], and c, confidence, is w/(w + 1) so in (0, 1). Therefore a belief has a form of “S〈f, c〉”. As the content of belief, statement S is a sentence in a formal language Narsese. Each statement expresses a relation among a few concepts. For the current discussion, it is enough to know that a statement may have various internal structures for different types of conceptual relation, and can contain other statements as components. In particular, implication statement P ⇒ Q and equivalence statement P ⇔ Q express “If P then Q” and “P if and only if Q”, respectively, where P and Q are statements themselves. As a reasoning system, NARS can carry out three types of inference tasks: Judgment. A judgment also has the form of “S〈f, c〉”, and represents a piece of new experience to be absorbed into the system’s beliefs. Besides adding it into memory, the system may also use it to revise or update the previous beliefs on statement S, as well as to derive new conclusions using various inference rules (including deduction, induction, abduction, analogy, etc.). Each
rule uses a truth-value function to calculate the truth-value of the conclusion according to the evidence provided by the premises. For example, the deduction rule can take P 〈f1, c1〉 and P ⇒ Q 〈f2, c2〉 to derive Q〈f, c〉, where 〈f, c〉 is calculated from 〈f1, c1〉 and 〈f2, c2〉 by the truth-value function for deduction. There is also a revision rule that merges distinct bodies of evidence on the same statement to produce more confident judgments. Question. A question has the form of “S?”, and represents a request for the system to find the truth-value of S according to its current beliefs. A question may contain variables to be instantiated. Besides looking in the memory for a matching belief, the system may also use the inference rules backwards to generate derived questions, whose answers will lead to answers of the original question. For example, from question Q? and belief P ⇒ Q 〈f, c〉, a new question P? can be proposed by the deduction rule. When there are multiple candidate answers, a choice rule ", "title": "" }, { "docid": "2d665c73e1ba2c4a178f5b5f90b1b79f", "text": "Foldem, a novel method of rapid fabrication of objects with multi-material properties is presented. Our specially formulated Foldem sheet allows users to fabricate and easily assemble objects with rigid, bendable, and flexible properties using a standard laser-cutter. The user begins by creating his designs in a vector graphics software package. A laser cutter is then used to fabricate the design by selectively ablating/vaporizing one or more layers of the Foldem sheet to achieve the desired physical properties for each joint. Herein the composition of the Foldem sheet, as well as various design considerations taken into account while building and designing the method, are described. Sample objects made with Foldem are demonstrated, each showcasing the unique attributes of Foldem. Additionally, a novel method for carefully calibrating a laser cutter for precise ablation is presented.", "title": "" }, { "docid": "e529a3f7b8241e0774afd892c4551389", "text": "The evolution of modern wireless communications systems has increased dramatically the demand for antennas, capable to be embedded in portable, or not, devices which serve a wireless land mobile or terrestrial-satellite network. With time and requirements, these devices become smaller in size and hence the antennas required for transmit and receive signals have also to be smaller and lightweight. As a matter of fact, microstrip antennas can meet these requirements. As they are lightweight and have low profile it is feasible them to be structured conformally to the mounting hosts. Moreover, they are easy fabricated, have low cost and are easy integrated into arrays or into microwave printed circuits. So, they are attractive choices for the above mentioned type of applications. For all that, the design of a microstrip antenna is not always an easy problem and the antenna designer is faced with difficulties coming from a) the inherent disadvantages of a printed resonant antenna element, for example the narrow impedance bandwidth, and b) the various requirements of the specific applications, which concern the operation of the radiating element, and can not be satisfied by a printed scheme with an ordinary configuration. 
For example, it would be demanded, the microstrip element to have gain characteristics that potentially incommensurate to its size or/and frequency bandwidth greater than the element could give, taking into account that it operates as a resonant cavity. Moreover, the rapid development in the field of Land Mobile Telephony as well as in the field of Wireless Local Area Networks(WLANs) demands devices capable to operate in more than one frequency bands. So the design of a printed antenna with intend to conform to multiple communications protocols, for example the IEEE 802.11b/g, in the band of 2.4GHz, and the IEEE 802.11a at 5.3GHz and 5.8GHz, would be a difficult task but at the same time a challenge for the designer. Counting in the above the possibility the device, and so the antenna, to serve terrestrial and also satellite navigation systems the problem of the antenna design is even more complicated. In this chapter techniques will be analysed, to design microstrip antennas that combine the attributes mentioned above which make them suitable for modern communications applications. Specific examples will be also presented for every case.", "title": "" }, { "docid": "ddef188a971d53c01d242bb9198eac10", "text": "State-of-the-art slot filling models for goal-oriented human/machine conversational language understanding systems rely on deep learning methods. While multi-task training of such models alleviates the need for large in-domain annotated datasets, bootstrapping a semantic parsing model for a new domain using only the semantic frame, such as the back-end API or knowledge graph schema, is still one of the holy grail tasks of language understanding for dialogue systems. This paper proposes a deep learning based approach that can utilize only the slot description in context without the need for any labeled or unlabeled in-domain examples, to quickly bootstrap a new domain. The main idea of this paper is to leverage the encoding of the slot names and descriptions within a multi-task deep learned slot filling model, to implicitly align slots across domains. The proposed approach is promising for solving the domain scaling problem and eliminating the need for any manually annotated data or explicit schema alignment. Furthermore, our experiments on multiple domains show that this approach results in significantly better slot-filling performance when compared to using only in-domain data, especially in the low data regime.", "title": "" }, { "docid": "6b5ccfaa3a8d7bbc7ad9325e008bd0a3", "text": "Although classical first-order logic is the de facto standard logical foundation for artificial intelligence, the lack of a built-in, semantically grounded capability for reasoning under uncertainty renders it inadequate for many important classes of problems. Probability is the bestunderstood and most widely applied formalism for computational scientific reasoning under uncertainty. Increasingly expressive languages are emerging for which the fundamental logical basis is probability. This paper presents Multi-Entity Bayesian Networks (MEBN), a first-order language for specifying probabilistic knowledge bases as parameterized fragments of Bayesian networks. MEBN fragments (MFrags) can be instantiated and combined to form arbitrarily complex graphical probability models. An MFrag represents probabilistic relationships among a conceptually meaningful group of uncertain hypotheses. Thus, MEBN facilitates representation of knowledge at a natural level of granularity. 
The semantics of MEBN assigns a probability distribution over interpretations of an associated classical first-order theory on a finite or countably infinite domain. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. A proof is given that MEBN can represent a probability distribution on interpretations of any finitely axiomatizable first-order theory.", "title": "" }, { "docid": "a6f9dc745682efb871e338b63c0cbbc4", "text": "Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a liner combination of a few atoms from such dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far less samples than those required by the classical Shannon-Nyquist Theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. On the other hand, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.", "title": "" }, { "docid": "f93ee5c9de994fa07e7c3c1fe6e336d1", "text": "Sleep bruxism (SB) is characterized by repetitive and coordinated mandible movements and non-functional teeth contacts during sleep time. Although the etiology of SB is controversial, the literature converges on its multifactorial origin. Occlusal factors, smoking, alcoholism, drug usage, stress, and anxiety have been described as SB trigger factors. Recent studies on this topic discussed the role of neurotransmitters on the development of SB. Thus, the purpose of this study was to detect and quantify the urinary levels of catecholamines, specifically of adrenaline, noradrenaline and dopamine, in subjects with SB and in control individuals. Urine from individuals with SB (n = 20) and without SB (n = 20) was subjected to liquid chromatography. The catecholamine data were compared by Mann–Whitney’s test (p ≤ 0.05). Our analysis showed higher levels of catecholamines in subjects with SB (adrenaline = 111.4 µg/24 h; noradrenaline = 261,5 µg/24 h; dopamine = 479.5 µg/24 h) than in control subjects (adrenaline = 35,0 µg/24 h; noradrenaline = 148,7 µg/24 h; dopamine = 201,7 µg/24 h). Statistical differences were found for the three catecholamines tested. 
It was concluded that individuals with SB have higher levels of urinary catecholamines.", "title": "" }, { "docid": "5cbc93a9844fcd026a1705ee031c6530", "text": "Accompanying the rapid urbanization, many developing countries are suffering from serious air pollution problem. The demand for predicting future air quality is becoming increasingly more important to government's policy-making and people's decision making. In this paper, we predict the air quality of next 48 hours for each monitoring station, considering air quality data, meteorology data, and weather forecast data. Based on the domain knowledge about air pollution, we propose a deep neural network (DNN)-based approach (entitled DeepAir), which consists of a spatial transformation component and a deep distributed fusion network. Considering air pollutants' spatial correlations, the former component converts the spatial sparse air quality data into a consistent input to simulate the pollutant sources. The latter network adopts a neural distributed architecture to fuse heterogeneous urban data for simultaneously capturing the factors affecting air quality, e.g. meteorological conditions. We deployed DeepAir in our AirPollutionPrediction system, providing fine-grained air quality forecasts for 300+ Chinese cities every hour. The experimental results on the data from three-year nine Chinese-city demonstrate the advantages of DeepAir beyond 10 baseline methods. Comparing with the previous online approach in AirPollutionPrediction system, we have 2.4%, 12.2%, 63.2% relative accuracy improvements on short-term, long-term and sudden changes prediction, respectively.", "title": "" }, { "docid": "58fbd637f7c044aeb0d55ba015c70f61", "text": "This paper outlines an innovative software development that utilizes Quality of Service (QoS) and parallel technologies in Cisco Catalyst Switches to increase the analytical performance of a Network Intrusion Detection and Protection System (NIDPS) when deployed in highspeed networks. We have designed a real network to present experiments that use a Snort NIDPS. Our experiments demonstrate the weaknesses of NIDPSes, such as inability to process multiple packets and propensity to drop packets in heavy traffic and high-speed networks without analysing them. We tested Snort’s analysis performance, gauging the number of packets sent, analysed, dropped, filtered, injected, and outstanding. We suggest using QoS configuration technologies in a Cisco Catalyst 3560 Series Switch and parallel Snorts to improve NIDPS performance and to reduce the number of dropped packets. Our results show that our novel configuration improves performance.", "title": "" }, { "docid": "0ea07af19fc199f6a9909bd7df0576a1", "text": "Detection of overlapping communities in complex networks has motivated recent research in the relevant fields. Aiming this problem, we propose a Markov dynamics based algorithm, called UEOC, which means, “unfold and extract overlapping communities”. In UEOC, when identifying each natural community that overlaps, a Markov random walk method combined with a constraint strategy, which is based on the corresponding annealed network (degree conserving random network), is performed to unfold the community. Then, a cutoff criterion with the aid of a local community function, called conductance, which can be thought of as the ratio between the number of edges inside the community and those leaving it, is presented to extract this emerged community from the entire network. 
The UEOC algorithm depends on only one parameter whose value can be easily set, and it requires no prior knowledge on the hidden community structures. The proposed UEOC has been evaluated both on synthetic benchmarks and on some real-world networks, and was compared with a set of competing algorithms. Experimental result has shown that UEOC is highly effective and efficient for discovering overlapping communities.", "title": "" }, { "docid": "0cf7ebc02a8396a615064892d9ee6f22", "text": "With the wider use of ontologies in the Semantic Web and as part of production systems, multiple scenarios for ontology maintenance and evolution are emerging. For example, successive ontology versions can be posted on the (Semantic) Web, with users discovering the new versions serendipitously; ontology-development in a collaborative environment can be synchronous or asynchronous; managers of projects may exercise quality control, examining changes from previous baseline versions and accepting or rejecting them before a new baseline is published, and so on. In this paper, we present different scenarios for ontology maintenance and evolution that we have encountered in our own projects and in those of our collaborators. We define several features that categorize these scenarios. For each scenario, we discuss the high-level tasks that an editing environment must support. We then present a unified comprehensive set of tools to support different scenarios in a single framework, allowing users to switch between different modes easily. 1 Evolution of Ontology Evolution Acceptance of ontologies as an integral part of knowledge-intensive applications has been growing steadily. The word ontology became a recognized substrate in fields outside the computer science, from bioinformatics to intelligence analysis. With such acceptance, came the use of ontologies in industrial systems and active publishing of ontologies on the (Semantic) Web. More and more often, developing an ontology is not a project undertaken by a single person or a small group of people in a research laboratory, but rather it is a large project with numerous participants, who are often geographically distributed, where the resulting ontologies are used in production environments with paying customers counting on robustness and reliability of the system. The Protégé ontology-development environment1 has become a widely used tool for developing ontologies, with more than 50,000 registered users. The Protégé group works closely with some of the tool’s users and we have a continuous stream of requests from them on the features that they would like to have supported in terms of managing and developing ontologies collaboratively. The configurations for collaborative development differ significantly however. For instance, Perot Systems2 uses a client–server mode of Protégé with multiple users simultaneously accessing the same copy of the ontology on the server. The NCI Center for Bioinformatics, which develops the NCI Thesaurus,3 has a different configuration: a baseline version of the Thesaurus is published regularly and between the baselines, multiple editors work asynchronously on their own versions. At the end of the cycle, the changes are reconciled. In the OBO project,4 ontology developers post their ontologies on a sourceforge site, using the sourceforge version-control system to publish successive versions.
In addition to specific requirements to support each of these collaboration models, users universally request the ability to annotate their changes, to hold discussions about the changes, to see the change history with respective annotations, and so on. When developing tool support for all the different modes and tasks in the process of ontology evolution, we started with separate and unrelated sets of Protégé plugins that supported each of the collaborative editing modes. This approach, however, was difficult to maintain; besides, we saw that tools developed for one mode (such as change annotation) will be useful in other modes. Therefore, we have developed a single unified framework that is flexible enough to work in either synchronous or asynchronous mode, in those environments where Protégé and our plugins are used to track changes and in those environments where there is no record of the change steps. At the center of the system is a Change and Annotation Ontology (CHAO) with instances recording specific changes and meta-information about them (author, timestamp, annotations, acceptance status, etc.). When Protégé and its change-management plugins are used for ontology editing, these tools create CHAO instances as a side product of the editing process. Otherwise, the CHAO instances are created from a structural diff produced by comparing two versions. The CHAO instances then drive the user interface that displays changes between versions to a user, allows him to accept and reject changes, to view concept history, to generate a new baseline, to publish a history of changes that other applications can use, and so on. This paper makes the following contributions: – analysis and categorization of different scenarios for ontology maintenance and evolution and their functional requirements (Section 2) – development of a comprehensive solution that addresses most of the functional requirements from the different scenarios in a single unified framework (Section 3) – implementation of the solution as a set of open-source Protégé plugins (Section 4) 2 Ontology-Evolution Scenarios and Tasks We will now discuss different scenarios for ontology maintenance and evolution, their attributes, and functional requirements.", "title": "" }, { "docid": "94277962f6f6e0667851600e851e7dad", "text": "Self-introduction of foreign bodies along the penile shaft has been reported in several ethnic and social groups, mainly in Asia, and recently has been described in Europe. We present the case of a 34-year-old homeless Russian immigrant who had an abdominal CT performed during an emergency department visit. On the CT scan, several hyperdense, well-demarcated subcutaneous nodules along the penile shaft were noted. Following a focused history and physical examination, the nodules were found to represent artificial foreign bodies made of glass, which were self-introduced by the patient in order to allegedly increase the pleasure of sexual partners. Penile nodules may be a manifestation of diverse pathological entities including infectious, inflammatory, and neoplastic processes. It is important for the radiologist to be familiar with this social phenomenon and its radiological appearance in order to avoid erroneous diagnosis.", "title": "" } ]
scidocsrr
f753efce12f912b664bee62369f28e8f
Regular Linear Temporal Logic
[ { "docid": "b79b3497ae4987e00129eab9745e1398", "text": "The automata-theoretic approach to linear temporal logic uses the theory of automata as a unifying paradigm for program specification, verification, and synthesis. Both programs and specifications are in essence descriptions of computations. These computations can be viewed as words over some alphabet. Thus, programs and specifications can be viewed as descriptions of languages over some alphabet. The automata-theoretic perspective considers the relationships between programs and their specifications as relationships between languages. By translating programs and specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications can be reduced to questions such as nonemptiness and containment of automata. Unlike classical automata theory, which focused on automata on finite words, the applications to program specification, verification, and synthesis, use automata on infinite words, since the computations in which we are interested are typically infinite. This paper provides an introduction to the theory of automata on infinite words and demonstrates its applications to program specification, verification, and synthesis.", "title": "" } ]
[ { "docid": "b9f774ccd37e0bf0e399dd2d986f258d", "text": "Predicting the final state of a running process, the remaining time to completion or the next activity of a running process are important aspects of runtime process management. Runtime management requires the ability to identify processes that are at risk of not meeting certain criteria in order to offer case managers decision information for timely intervention. This in turn requires accurate prediction models for process outcomes and for the next process event, based on runtime information available at the prediction and decision point. In this paper, we describe an initial application of deep learning with recurrent neural networks to the problem of predicting the next process event. This is both a novel method in process prediction, which has previously relied on explicit process models in the form of Hidden Markov Models (HMM) or annotated transition systems, and also a novel application for deep learning methods.", "title": "" }, { "docid": "8bcb5def2a0b847a5d0800849443e5bc", "text": "BACKGROUND\nMMPs play a crucial role in the process of cancer invasion and metastasis.\n\n\nMETHODS\nThe influence of NAC on invasion and MMP-9 production of human bladder cancer cell line T24 was investigated using an in vitro invasion assay, gelatin zymography, Western and Northern blot analyses and RT-PCR assays.\n\n\nRESULTS\nTPA increased the number of invading T24 cells through reconstituted basement membrane more than 10-fold compared to basal condition. NAC inhibited TPA-enhanced invasion dose-dependently. TPA increased the MMP-9 production by T24 cells without altering expression of TIMP-1 gene, while NAC suppressed TPA-enhanced production of MMP-9. Neither TPA nor NAC altered TIMP-1 mRNA level in T24 cells. In vitro experiments demonstrated that MMP-9 was directly inhibited by NAC but was not influenced by TPA.\n\n\nCONCLUSION\nNAC limits invasion of T24 human bladder cancer cells by inhibiting the MMP-9 production in addition to a direct inhibition of MMP-9 activity.", "title": "" }, { "docid": "03cea891c4a9fdc77832979267f9dca9", "text": "Any multiprocessing facility must include three features: elementary exclusion, data protection, and process saving. While elementary exclusion must rest on some hardware facility (e.g. a test-and-set instruction), the other two requirements are fulfilled by features already present in applicative languages. Data protection may be obtained through the use of procedures (closures or funargs),and process saving may be obtained through the use of the CATCH operator. The use of CATCH, in particular, allows an elegant treatment of process saving.\n We demonstrate these techniques by writing the kernel and some modules for a multiprocessing system. The kernel is very small. Many functions which one would normally expect to find inside the kernel are completely decentralized. We consider the implementation of other schedulers, interrupts, and the implications of these ideas for language design.", "title": "" }, { "docid": "b66be42a294208ec31d44e57ae434060", "text": "Emotional speech recognition aims to automatically classify speech units (e.g., utterances) into emotional states, such as anger, happiness, neutral, sadness and surprise. The major contribution of this paper is to rate the discriminating capability of a set of features for emotional speech recognition when gender information is taken into consideration. 
A total of 87 features has been calculated over 500 utterances of the Danish Emotional Speech database. The Sequential Forward Selection method (SFS) has been used in order to discover the 5-10 features which are able to classify the samples in the best way for each gender. The criterion used in SFS is the crossvalidated correct classification rate of a Bayes classifier where the class probability distribution functions (pdfs) are approximated via Parzen windows or modeled as Gaussians. When a Bayes classifier with Gaussian pdfs is employed, a correct classification rate of 61.1% is obtained for male subjects and a corresponding rate of 57.1% for female ones. In the same experiment, a random classification would result in a correct classification rate of 20%. When gender information is not considered a correct classification score of 50.6% is obtained.", "title": "" }, { "docid": "8e0e77e78c33225922b5a45fee9b4242", "text": "In this paper, we address the issues of maintaining sensing coverage and connectivity by keeping a minimum number of sensor nodes in the active mode in wireless sensor networks. We investigate the relationship between coverage and connectivity by solving the following two sub-problems. First, we prove that if the radio range is at least twice the sensing range, complete coverage of a convex area implies connectivity among the working set of nodes. Second, we derive, under the ideal case in which node density is sufficiently high, a set of optimality conditions under which a subset of working sensor nodes can be chosen for complete coverage. Based on the optimality conditions, we then devise a decentralized density control algorithm, Optimal Geographical Density Control (OGDC), for density control in large scale sensor networks. The OGDC algorithm is fully localized and can maintain coverage as well as connectivity, regardless of the relationship between the radio range and the sensing range. 

Ns-2 simulations show that OGDC outperforms existing density control algorithms [25, 26, 29] with respect to the number of working nodes needed and network lifetime (with up to 50% improvement), and achieves almost the same coverage as the algorithm with the best result.", "title": "" }, { "docid": "f5d6bfa66e4996bddc6ca1fbecc6c25d", "text": "Internet-connected consumer electronics marketed as smart devices (also known as Internet-of-Things devices) usually lack essential security protection mechanisms. This puts user privacy and security in great danger. One of the essential steps to compromise vulnerable devices is locating them through horizontal port scans. In this paper, we focus on the problem of detecting horizontal port scans in home networks. We propose a software-defined networking (SDN)-based firewall platform that is capable of detecting horizontal port scans. Current SDN implementations (e.g., OpenFlow) do not provide access to packet-level information, which is essential for network security applications, due to performance limitations. Our platform uses FleXight, our proposed new information channel between SDN controller and data path elements to access packet-level information. FleXight uses per-flow sampling and dynamical sampling rate adjustments to provide the necessary information to the controller while keeping the overhead very low. We evaluate our solution on a large real-world packet trace from an ISP and show that our system can identify all attackers and 99% of susceptible victims with only 0.75% network overhead. We also present a detailed usability analysis of our system.", "title": "" }, { "docid": "405bcd759da950aa0d4b8aeb9d8488bb", "text": "Background/Aim: Using machine learning approaches as non-invasive methods have been used recently as an alternative method in staging chronic liver diseases for avoiding the drawbacks of biopsy. This study aims to evaluate different machine learning techniques in prediction of advanced fibrosis by combining the serum bio-markers and clinical information to develop the classification models. Methods: A prospective cohort of 39,567 patients with chronic hepatitis C was divided into two sets—one categorized as mild to moderate fibrosis F0-F2, and the other categorized as advanced fibrosis F3-F4 according to METAVIR score. Decision tree, genetic algorithm, particle swarm optimization, and multi-linear regression models for advanced fibrosis risk prediction were developed. Receiver operating characteristic curve analysis was performed to evaluate the performance of the proposed models. Results: Age, platelet count, AST, and albumin were found to be statistically significant to advanced fibrosis. The machine learning algorithms under study were able to predict advanced fibrosis in patients with HCC with AUROC ranging between 0.73 and 0.76 and accuracy between 66.3 and 84.4 percent. Conclusions: Machine-learning approaches could be used as alternative methods in prediction of the risk of advanced liver fibrosis due to chronic hepatitis C.", "title": "" }, { "docid": "b79fc7fb12d1ac2fc8d6ad3f7123364a", "text": "We characterize the structural and electronic changes during the photoinduced enol-keto tautomerization of 2-(2'-hydroxyphenyl)-benzothiazole (HBT) in a nonpolar solvent (tetrachloroethene). We quantify the redistribution of electronic charge and intramolecular proton translocation in real time by combining UV-pump/IR-probe spectroscopy and quantum chemical modeling. 
We find that the photophysics of this prototypical molecule involves proton coupled electron transfer (PCET), from the hydroxyphenyl to the benzothiazole rings, resulting from excited state intramolecular proton transfer (ESIPT) coupled to electron transfer through the conjugated double bond linking the two rings. The combination of polarization-resolved mid-infrared spectroscopy of marker modes and time-dependent density functional theory (TD-DFT) provides key insights into the transient structures of the molecular chromophore during ultrafast isomerization dynamics.", "title": "" }, { "docid": "ffb7754f7ecabf639aba0ef257615558", "text": "Novel approaches have taken Augmented Reality (AR) beyond traditional body-worn or hand-held displays, leading to the creation of a new branch of AR: Spatial Augmented Reality (SAR) providing additional application areas. SAR is a rapidly emerging field that uses digital projectors to render virtual objects onto 3D objects in the real space. When mounting digital projectors on robots, this collaboration paves the way for unique Human-Robot Interactions (HRI) that otherwise would not be possible. Adding to robots the capability of projecting interactive Augmented Reality content enables new forms of interactions between humans, robots, and virtual objects, enabling new applications. In this work it is investigated the use of SAR techniques on mobile robots for better enabling this to interact in the future with elderly or injured people during rehabilitation, or with children in the pediatric ward of a hospital.", "title": "" }, { "docid": "dc3495ec93462e68f606246205a8416d", "text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.", "title": "" }, { "docid": "4c12b827ee445ab7633aefb8faf222a2", "text": "Research shows that speech dereverberation (SD) with Deep Neural Network (DNN) achieves the state-of-the-art results by learning spectral mapping, which, simultaneously, lacks the characterization of the local temporal spectral structures (LTSS) of speech signal and calls for a large storage space that is impractical in real applications. Contrarily, the Convolutional Neural Network (CNN) offers a better modeling ability by considering local patterns and has less parameters with its weights sharing property, which motivates us to employ the CNN for SD task. 
In this paper, to our knowledge, a Deep Convolutional Encoder-Decoder (DCED) model is proposed for the first time in dealing with the SD task (DCED-SD), where the advantage of the DCED-SD model lies in its powerful LTSS modeling capability via convolutional encoder-decoder layers with smaller storage requirement. By taking the reverberant and anechoic spectrum as training pairs, the proposed DCED-SD is well-trained in a supervised manner with less convergence time. Additionally, the DCED-SD model size is 23 times smaller than the size of DNN-SD model with better performance achieved. By using the simulated and real-recorded data, extensive experiments have been conducted to demonstrate the superiority of DCED-based SD method over the DNN-based SD method under different unseen reverberant conditions.", "title": "" }, { "docid": "728215fb8bb89c7830768e705e5f1c1c", "text": "Human and automated tutors attempt to choose pedagogical activities that will maximize student learning, informed by their estimates of the student's current knowledge. There has been substantial research on tracking and modeling student learning, but significantly less attention on how to plan teaching actions and how the assumed student model impacts the resulting plans. We frame the problem of optimally selecting teaching actions using a decision-theoretic approach and show how to formulate teaching as a partially observable Markov decision process planning problem. This framework makes it possible to explore how different assumptions about student learning and behavior should affect the selection of teaching actions. We consider how to apply this framework to concept learning problems, and we present approximate methods for finding optimal teaching actions, given the large state and action spaces that arise in teaching. Through simulations and behavioral experiments, we explore the consequences of choosing teacher actions under different assumed student models. In two concept-learning tasks, we show that this technique can accelerate learning relative to baseline performance.", "title": "" }, { "docid": "d780db3ec609d74827a88c0fa0d25f56", "text": "Highly automated test vehicles are rare today, and (independent) researchers have often limited access to them. Also, developing fully functioning system prototypes is time and effort consuming. In this paper, we present three adaptions of the Wizard of Oz technique as a means of gathering data about interactions with highly automated vehicles in early development phases. Two of them address interactions between drivers and highly automated vehicles, while the third one is adapted to address interactions between pedestrians and highly automated vehicles. The focus is on the experimental methodology adaptations and our lessons learned.", "title": "" }, { "docid": "2e0fb1af3cb0fdd620144eb93d55ef3e", "text": "A privacy policy is a legal document, used by websites to communicate how the personal data that they collect will be managed. By accepting it, the user agrees to release his data under the conditions stated by the policy. Privacy policies should provide enough information to enable users to make informed decisions. Privacy regulations support this by specifying what kind of information has to be provided. As privacy policies can be long and difficult to understand, users tend not to read them. Because of this, users generally agree with a policy without knowing what it states and whether aspects important to him are covered at all. 
In this paper we present a solution to assist the user by providing a structured way to browse the policy content and by automatically assessing the completeness of a policy, i.e. the degree of coverage of privacy categories important to the user. The privacy categories are extracted from privacy regulations, while text categorization and machine learning techniques are used to verify which categories are covered by a policy. The results show the feasibility of our approach; an automatic classifier, able to associate the right category to paragraphs of a policy with an accuracy approximating that obtainable by a human judge, can be effectively created.", "title": "" }, { "docid": "1db14c8cb5434bd28a2d4b3e6b928a9a", "text": "Nested virtualization [1] provides an extra layer of virtualization to enhance security with fairly reasonable performance impact. Usercentric vision of cloud computing gives a high-level of control on the whole infrastructure [2], such as untrusted dom0 [3, 4]. This paper introduces RetroVisor, a security architecture to seamlessly run a virtual machine (VM) on multiple hypervisors simultaneously. We argue that this approach delivers high-availability and provides strong guarantees on multi IaaS infrastructures. The user can perform detection and remediation against potential hypervisors weaknesses, unexpected behaviors and exploits.", "title": "" }, { "docid": "dcef528dbd89bc2c26820bdbe52c3d8d", "text": "The evolution of digital libraries and the Internet has dramatically transformed the processing, storage, and retrieval of information. Efforts to digitize text, images, video, and audio now consume a substantial portion of both academic anld industrial activity. Even when there is no shortage of textual materials on a particular topic, procedures for indexing or extracting the knowledge or conceptual information contained in them can be lacking. Recently developed information retrieval technologies are based on the concept of a vector space. Data are modeled as a matrix, and a user's query of the database is represented as a vector. Relevant documents in the database are then identified via simple vector operations. Orthogonal factorizations of the matrix provide mechanisms for handling uncertainty in the database itself. The purpose of this paper is to show how such fundamental mathematical concepts from linear algebra can be used to manage and index large text collections.", "title": "" }, { "docid": "af6464d1e51cb59da7affc73977eed71", "text": "Recommender systems leverage both content and user interactions to generate recommendations that fit users' preferences. The recent surge of interest in deep learning presents new opportunities for exploiting these two sources of information. To recommend items we propose to first learn a user-independent high-dimensional semantic space in which items are positioned according to their substitutability, and then learn a user-specific transformation function to transform this space into a ranking according to the user's past preferences. An advantage of the proposed architecture is that it can be used to effectively recommend items using either content that describes the items or user-item ratings. We show that this approach significantly outperforms state-of-the-art recommender systems on the MovieLens 1M dataset.", "title": "" }, { "docid": "a46954af087b37ebfc04866dca1552d2", "text": "An exoskeleton has to be lightweight, compliant, yet powerful to fulfill the demanding task of walking. 
This imposes a great challenge for the actuator design. Electric motors, by far the most common actuator in robotic, orthotic, and prosthetic devices, cannot provide sufficiently high peak and average power and force/torque output, and they normally require high-ratio, heavy reducer to produce the speeds and high torques needed for human locomotion. Studies on the human muscle-tendon system have shown that muscles (including tendons and ligaments) function as a spring, and by storing energy and releasing it at a proper moment, locomotion becomes more energy efficient. Inspired by the muscle behavior, we propose a novel actuation strategy for exoskeleton design. In this paper, the collected gait data are analyzed to identify the spring property of the human muscle-tendon system. Theoretical optimization results show that adding parallel springs can reduce the peak torque by 66%, 53%, and 48% for hip flexion/extension (F/E), hip abduction/adduction (A/A), and ankle dorsi/plantar flexion (D/PF), respectively, and the rms power by 50%, 45%, and 61%, respectively. Adding a series spring (forming a Series Elastic Actuator, SEA) reduces the peak power by 79% for ankle D/PF, and by 60% for hip A/A. A SEA does not reduce the peak power demand at other joints. The optimization approach can be used for designing other wearable robots as well.", "title": "" }, { "docid": "26d0809a2c8ab5d5897ca43c19fc2b57", "text": "This study outlines a simple 'Profilometric' method for measuring the size and function of the wrinkles. Wrinkle size was measured in relaxed conditions and the representative parameters were considered to be the mean 'Wrinkle Depth', the mean 'Wrinkle Area', the mean 'Wrinkle Volume', and the mean 'Wrinkle Tissue Reservoir Volume' (WTRV). These parameters were measured in the wrinkle profiles under relaxed conditions. The mean 'Wrinkle to Wrinkle Distance', which measures the distance between two adjacent wrinkles, is an accurate indicator of the muscle relaxation level during replication. This parameter, identified as the 'Muscle Relaxation Level Marker', and its reduction are related to increased muscle tone or contraction and vice versa. The mean Wrinkle to Wrinkle Distance is very important in experiments where the effectiveness of an anti-wrinkle preparation is tested. Thus, the correlative wrinkles' replicas, taken during follow up in different periods, are only those that show the same mean Wrinkle to Wrinkle Distance. The wrinkles' functions were revealed by studying the morphological changes of the wrinkles and their behavior during relaxed conditions, under slight increase of muscle tone and under maximum wrinkling. Facial wrinkles are not a single groove, but comprise an anatomical and functional unit (the 'Wrinkle Unit') along with the surrounding skin. This Wrinkle Unit participates in the functions of a central neuro-muscular system of the face responsible for protection, expression, and communication. Thus, the Wrinkle Unit, the superficial musculoaponeurotic system (superficial fascia of the face), the underlying muscles controlled by the CNS and Psyche, are considered to be a 'Functional Psycho-Neuro-Muscular System of the Face for Protection, Expression and Communication'. 
The three major functions of this system exerted in the central part of the face and around the eyes are: (1) to open and close the orifices (eyes, nose, and mouth), contributing to their functions; (2) to protect the eyes from sun, foreign bodies, etc.; (3) to contribute to facial expression, reflecting emotions (real, pretended, or theatrical) during social communication. These functions are exercised immediately and easily, without any opposition ('Wrinkling Ability') because of the presence of the Wrinkle Unit that gives (a) the site of refolding (the wrinkle is a waiting fold, ready to respond quickly at any moment for any skin mobility need) and (b) the appropriate skin tissue for extension or compression (this reservoir of tissue is measured by the parameter of WTRV). The Wrinkling Ability of a skin area is linked to the wrinkle's functions and can be measured by the parameter of 'Skin Tissue Volume Compressed around the Wrinkle' in mm(3) per 30 mm wrinkle during maximum wrinkling. The presence of wrinkles is a sign that the skin's 'Recovery Ability' has declined progressively with age. The skin's Recovery Ability is linked to undesirable cosmetic effects of ageing and wrinkling. This new Profilometric method can be applied in studies where the effectiveness of anti-wrinkle preparations or the cosmetic results of surgery modalities are tested, as well as in studies focused on the functional physiology of the Wrinkle Unit.", "title": "" }, { "docid": "3e80dc7319f1241e96db42033c16f6b4", "text": "Automatic expert assignment is a common problem encountered in both industry and academia. For example, for conference program chairs and journal editors, in order to collect \"good\" judgments for a paper, it is necessary for them to assign the paper to the most appropriate reviewers. Choosing appropriate reviewers of course includes a number of considerations such as expertise and authority, but also diversity and avoiding conflicts. In this paper, we explore the expert retrieval problem and implement an automatic paper-reviewer recommendation system that considers aspects of expertise, authority, and diversity. In particular, a graph is first constructed on the possible reviewers and the query paper, incorporating expertise and authority information. Then a Random Walk with Restart (RWR) [1] model is employed on the graph with a sparsity constraint, incorporating diversity information. Extensive experiments on two reviewer recommendation benchmark datasets show that the proposed method obtains performance gains over state-of-the-art reviewer recommendation systems in terms of expertise, authority, diversity, and, most importantly, relevance as judged by human experts.", "title": "" } ]
scidocsrr
a7e605d575ea10d3522df24f9f83f534
Learning travel recommendations from user-generated GPS traces
[ { "docid": "50ef3775f9d18fe368c166cfd3ff2bca", "text": "In many applications that track and analyze spatiotemporal data, movements obey periodic patterns; the objects follow the same routes (approximately) over regular time intervals. For example, people wake up at the same time and follow more or less the same route to their work everyday. The discovery of hidden periodic patterns in spatiotemporal data, apart from unveiling important information to the data analyst, can facilitate data management substantially. Based on this observation, we propose a framework that analyzes, manages, and queries object movements that follow such patterns. We define the spatiotemporal periodic pattern mining problem and propose an effective and fast mining algorithm for retrieving maximal periodic patterns. We also devise a novel, specialized index structure that can benefit from the discovered patterns to support more efficient execution of spatiotemporal queries. We evaluate our methods experimentally using datasets with object trajectories that exhibit periodicity.", "title": "" } ]
[ { "docid": "95c14d030cfaca90cf8e97213c77595a", "text": "Add list of 170 authors and their institutions here. * The authors are indebted to Markus Hauser, University of Zurich, for his thoughtful comments and suggestions relevant to this monograph. 2 ABSTRACT GLOBE is both a research program and a social entity. The GLOBE social entity is a network of 170 social scientists and management scholars from 61 cultures throughout the world, working in a coordinated long-term effort to examine the interrelationships between societal culture, organizational culture and practices, and organizational leadership. The meta-goal of the Global Leadership and Organizational Effectiveness (GLOBE) Research Program is to develop an empirically based theory to describe, understand, and predict the impact of cultural variables on leadership and organizational processes and the effectiveness of these processes. This monograph presents a description of the GLOBE research program and some initial empirical findings resulting from GLOBE research. A central question in this part of the research concerns the extent to which specific leadership attributes and behaviors are universally endorsed as contributing to effective leadership and the extent to which the endorsement of leader attributes and behaviors is culturally contingent. We identified six global leadership dimensions of culturally endorsed implicit theories of leadership (CLTs). Preliminary evidence indicates that these dimensions are significantly correlated with isomorphic dimensions of societal and organizational culture. These findings are consistent with the hypothesis that selected cultural differences strongly influence important ways in which people think about leaders and norms concerning the status, influence, and privileges granted to leaders. The hypothesis that charismatic/value-based leadership would be universally endorsed is strongly supported. Team-oriented leadership is strongly correlated with charismatic/value-based leadership, and also universally endorsed. Humane and participative leadership dimensions are nearly universally endorsed. The endorsement of the remaining global leadership dimensions-self-protective and autonomous leadership vary by culture. 3 We identified 21 specific leader attributes and behaviors that are universally viewed as contributing to leadership effectiveness. Eleven of the specific leader characteristics composing the global charismatic/value-based leadership dimension were among these 21 attributes. Eight specific leader characteristics were universally viewed as impediments to leader effectiveness. We also identified 35 specific leader characteristics that are viewed as contributors in some cultures and impediments in other cultures. We present these, as well as other findings, in more detail in this monograph. A particular strength of the GLOBE research design is the combination of quantitative and qualitative data. Elimination of common method and common source variance is also …", "title": "" }, { "docid": "7fd3b611b0dab164d83f03180cf4789d", "text": "Mobile users are increasingly becoming targets of malware infections and scams. Some platforms, such as Android, are more open than others and are therefore easier to exploit than other platforms. In order to curb such attacks it is important to know how these attacks originate. We take a previously unexplored step in this direction and look for the answer at the interface between mobile apps and the Web. 
Numerous inapp advertisements work at this interface: when the user taps on an advertisement, she is led to a web page which may further redirect until the user reaches the final destination. Similarly, applications also embed web links that again lead to the outside Web. Even though the original application may not be malicious, the Web destinations that the user visits could play an important role in propagating attacks. In order to study such attacks we develop a systematic methodology consisting of three components related to triggering web links and advertisements, detecting malware and scam campaigns, and determining the provenance of such campaigns reaching the user. We have realized this methodology through various techniques and contributions and have developed a robust, integrated system capable of running continuously without human intervention. We deployed this system for a two-month period and analyzed over 600,000 applications in the United States and in China while triggering a total of about 1.5 million links in applications to the Web. We gain a general understanding of attacks through the app-web interface as well as make several interesting findings, including a rogue antivirus scam, free iPad and iPhone scams, and advertisements propagating SMS trojans disguised as fake movie players. In broader terms, our system enables locating attacks and identifying the parties (such as specific ad networks, websites, and applications) that intentionally or unintentionally let them reach the end users and, thus, increasing accountability from these parties.", "title": "" }, { "docid": "a33ed384b8f4a86e8cc82970c7074bad", "text": "There appear to be no brain imaging studies investigating which brain mechanisms subserve affective, impulsive violence versus planned, predatory violence. It was hypothesized that affectively violent offenders would have lower prefrontal activity, higher subcortical activity, and reduced prefrontal/subcortical ratios relative to controls, while predatory violent offenders would show relatively normal brain functioning. Glucose metabolism was assessed using positron emission tomography in 41 comparisons, 15 predatory murderers, and nine affective murderers in left and right hemisphere prefrontal (medial and lateral) and subcortical (amygdala, midbrain, hippocampus, and thalamus) regions. Affective murderers relative to comparisons had lower left and right prefrontal functioning, higher right hemisphere subcortical functioning, and lower right hemisphere prefrontal/subcortical ratios. In contrast, predatory murderers had prefrontal functioning that was more equivalent to comparisons, while also having excessively high right subcortical activity. Results support the hypothesis that emotional, unplanned impulsive murderers are less able to regulate and control aggressive impulses generated from subcortical structures due to deficient prefrontal regulation. It is hypothesized that excessive subcortical activity predisposes to aggressive behaviour, but that while predatory murderers have sufficiently good prefrontal functioning to regulate these aggressive impulses, the affective murderers lack such prefrontal control over emotion regulation.", "title": "" }, { "docid": "443df7fa37723021c2079fd524f199ab", "text": "OBJECTIVE\nCircumcision, performed for religious or medical reasons is the procedure of surgical excision of the skin covering the glans penis, preputium in a certain shape and dimension so as to expose the tip of the glans penis. 
Short- and long-term complication rates of up to 50% have been reported, varying due to the recording system of different countries in which the procedure has been accepted as a widely performed simple surgical procedure. In this study, treatment procedures in patients presented to our clinic with complications after circumcision are described and methods to decrease the rate of the complications are reviewed.\n\n\nMATERIAL AND METHODS\nCases that presented to our clinic between 2010 and 2013 with early complications of circumcision were retrospectively reviewed. Cases with acceptedly major complications as excess skin excision, skin necrosis and total amputation of the glans were included in the study, while cases with minor complications such as bleeding, hematoma and infection were excluded from the study.\n\n\nRESULTS\nRepair with full-thickness skin grafts was performed in patients with excess skin excision. In cases with skin necrosis, following the debridement of the necrotic skin, primary repair or repair with full-thickness graft was performed in cases where full-thickness skin defects developed and other cases with partial skin loss were left to secondary healing. Repair with an inguinal flap was performed in the case with glans amputation.\n\n\nCONCLUSION\nCircumcisions performed by untrained individuals are to be blamed for the complications of circumcision reported in this country. The rate of complications increases during the \"circumcision feasts\" where multiple circumcisions were performed. This also predisposes to transmission of various diseases, primarily hepatitis B/C and AIDS. Circumcision is a surgical procedure that should be performed by specialists under appropriate sterile circumstances in which the rate of complications would be decreased. The child may be exposed to recurrent psychosocial and surgical trauma when it is performed by incompetent individuals.", "title": "" }, { "docid": "fd0dccac0689390e77a0cc1fb14e5a34", "text": "Chromatin remodeling is a complex process shaping the nucleosome landscape, thereby regulating the accessibility of transcription factors to regulatory regions of target genes and ultimately managing gene expression. The SWI/SNF (switch/sucrose nonfermentable) complex remodels the nucleosome landscape in an ATP-dependent manner and is divided into the two major subclasses Brahma-associated factor (BAF) and Polybromo Brahma-associated factor (PBAF) complex. Somatic mutations in subunits of the SWI/SNF complex have been associated with different cancers, while germline mutations have been associated with autism spectrum disorder and the neurodevelopmental disorders Coffin–Siris (CSS) and Nicolaides–Baraitser syndromes (NCBRS). CSS is characterized by intellectual disability (ID), coarsening of the face and hypoplasia or absence of the fifth finger- and/or toenails. So far, variants in five of the SWI/SNF subunit-encoding genes ARID1B, SMARCA4, SMARCB1, ARID1A, and SMARCE1 as well as variants in the transcription factor-encoding gene SOX11 have been identified in CSS-affected individuals. ARID2 is a member of the PBAF subcomplex, which until recently had not been linked to any neurodevelopmental phenotypes. In 2015, mutations in the ARID2 gene were associated with intellectual disability. In this study, we report on two individuals with private de novo ARID2 frameshift mutations. 

Both individuals present with a CSS-like phenotype including ID, coarsening of facial features, other recognizable facial dysmorphisms and hypoplasia of the fifth toenails. Hence, this study identifies mutations in the ARID2 gene as a novel and rare cause for a CSS-like phenotype and enlarges the list of CSS-like genes.", "title": "" }, { "docid": "bdf3417010f59745e4aaa1d47b71c70e", "text": "Recent studies witness the success of Bag-of-Features (BoF) frameworks for video based human action recognition. The detection and description of local interest regions are two fundamental problems in BoF framework. In this paper, we propose a motion boundary based sampling strategy and spatialtemporal (3D) co-occurrence descriptors for action video representation and recognition. Our sampling strategy is partly inspired by the recent success of dense trajectory (DT) based features [1] for action recognition. Compared with DT, we densely sample spatial-temporal cuboids along motion boundary which can greatly reduce the number of valid trajectories while preserve the discriminative power. Moreover, we develop a set of 3D co-occurrence descriptors which take account of the spatial-temporal context within local cuboids and deliver rich information for recognition. Furthermore, we decompose each 3D co-occurrence descriptor at pixel level and bin level and integrate the decomposed components with a multi-channel framework, which can improve the performance significantly. To evaluate the proposed methods, we conduct extensive experiments on three benchmarks including KTH, YouTube and HMDB51. The results show that our sampling strategy significantly reduces the computational cost of point tracking without degrading performance. Meanwhile, we achieve superior performance than the state-ofthe-art methods. We report 95.6% on KTH, 87.6% on YouTube and 51.8% on HMDB51.", "title": "" }, { "docid": "2319dccdb7635a23ab702f10788ea09f", "text": "The molecular basis of obligate anaerobiosis is not well established. Bacteroides thetaiotaomicron is an opportunistic pathogen that cannot grow in fully aerobic habitats. Because microbial niches reflect features of energy-producing strategies, we suspected that aeration would interfere with its central metabolism. In anaerobic medium, this bacterium fermented carbohydrates to a mixture of succinate, propionate and acetate. When cultures were exposed to air, the formation of succinate and propionate ceased abruptly. In vitro analysis demonstrated that the fumarase of the succinate-propionate pathway contains an iron-sulphur cluster that is sensitive to superoxide. In vivo, fumarase activity fell to < 5% when cells were aerated; virtually all activity was recovered after extracts were chemically treated to rebuild iron-sulphur clusters. Aeration minimally affected the remainder of this pathway. However, aeration reduced pyruvate:ferredoxin oxidoreductase (PFOR), the first enzyme in the acetate fermentation branch, to 3% of its anaerobic activity. This cluster-containing enzyme was damaged in vitro by molecular oxygen but not by superoxide. Thus, aerobic growth is precluded by the vulnerability of these iron-sulphur cluster enzymes to oxidation. Importantly, both enzymes were maintained in a stable, inactive form for long periods in aerobic cells; they were then rapidly repaired when the bacterium was returned to anaerobic medium. 
This result explains how this pathogen can easily recover from occasional exposure to oxygen.", "title": "" }, { "docid": "c326a265fe244b8e6602c321c79da068", "text": "Numerical weather models generate a vast amount of information which requires human interpretation to generate local weather forecasts. Convolutional Neural Networks (CNN) can extract features from images showing unprecedented results in many different domains. In this work, we propose the use of CNN models to interpret numerical weather model data which, by capturing the spatial and temporal relationships between the input variables, can produce local forecasts. Different architectures are compared and a methodology to introspect the models is presented.", "title": "" }, { "docid": "ae23145d649c6df81a34babdfc142b31", "text": "Multi-head attention is appealing for the ability to jointly attend to information from different representation subspaces at different positions. In this work, we introduce a disagreement regularization to explicitly encourage the diversity among multiple attention heads. Specifically, we propose three types of disagreement regularization, which respectively encourage the subspace, the attended positions, and the output representation associated with each attention head to be different from other heads. Experimental results on widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness and universality of the proposed approach.", "title": "" }, { "docid": "324c6f4592ed201aebdb4a1a87740984", "text": "In this paper, we propose the Electric Vehicle Routing Problem with Time Windows and Mixed Fleet (E-VRPTWMF) to optimize the routing of a mixed fleet of electric commercial vehicles (ECVs) and conventional internal combustion commercial vehicles (ICCVs). Contrary to existing routing models for ECVs, which assume energy consumption to be a linear function of traveled distance, we utilize a realistic energy consumption model that incorporates speed, gradient and cargo load distribution. This is highly relevant in the context of ECVs because energy consumption determines the maximal driving range of ECVs and the recharging times at stations. To address the problem, we develop an Adaptive Large Neighborhood Search algorithm that is enhanced by a local search for intensification. In numerical studies on newly designed E-VRPTWMF test instances, we investigate the effect of considering the actual load distribution on the structure and quality of the generated solutions. Moreover, we study the influence of different objective functions on solution attributes and on the contribution of ECVs to the overall routing costs. Finally, we demonstrate the performance of the developed algorithm on benchmark instances of the related problems VRPTW and E-VRPTW.", "title": "" }, { "docid": "03543804dc8cd0a62961cc9df7726d4b", "text": "We introduce The House Of inteRactions (THOR), a framework for visual AI research, available at http://ai2thor.allenai.org. AI2-THOR consists of near photo-realistic 3D indoor scenes, where AI agents can navigate in the scenes and interact with objects to perform tasks. AI2-THOR enables research in many different domains including but not limited to deep reinforcement learning, imitation learning, learning by interaction, planning, visual question answering, unsupervised representation learning, object detection and segmentation, and learning models of cognition. 

The goal of AI2-THOR is to facilitate building visually intelligent models and push the research forward in this domain.", "title": "" }, { "docid": "72977ccc3935149153c46560b8039571", "text": "BACKGROUND\nMelasma is an acquired treatment-resistant hyperpigmentation of the skin.\n\n\nMETHODS\nSixteen women with idiopathic melasma were included in our trial. After randomization by another clinician, they were instructed to use, at night, 5% ascorbic acid cream on one side of the face and 4% hydroquinone cream on the other side, for 16 weeks. Sunscreen was applied daily throughout the period of observation. They were evaluated every month by colorimetry, digital photography, and regular color slides. Subjective evaluation by each patient was also taken into account.\n\n\nRESULTS\nThe best subjective improvement was observed on the hydroquinone side with 93% good and excellent results, compared with 62.5% on the ascorbic acid side (P < 0.05); however, colorimetric measures showed no statistical differences. Side-effects were present in 68.7% (11/16) with hydroquinone vs. 6.2% (1/16) with ascorbic acid.\n\n\nCONCLUSION\nAlthough hydroquinone showed a better response, ascorbic acid may play a role in the therapy of melasma as it is almost devoid of side-effects; it could be used alone or in combination therapy.", "title": "" }, { "docid": "a9e30e02bcbac0f117820d21bf9941da", "text": "The question of how identity is affected when diagnosed with dementia is explored in this capstone thesis. With the rise of dementia diagnoses (Goldstein-Levitas, 2016) there is a need for understanding effective approaches to care as emotional components remain intact. The literature highlights the essence of personhood and how person-centered care (PCC) is essential to preventing isolation and impacting a sense of self and well-being (Killick, 2004). Meeting spiritual needs in the sense of hope and purpose may also improve quality of life and delay symptoms. Dance/movement therapy (DMT) is specifically highlighted as an effective approach as sessions incorporate the components to physically, emotionally, and spiritually stimulate the individual with dementia. A DMT intervention was developed and implemented at an assisted living facility in the Boston area within a specific unit dedicated to the care of residents who had a primary diagnosis of mild to severe dementia. A Chacian framework is used with sensory stimulation techniques to address physiological needs. Results indicated positive experiences from observations and merited the need to conduct more research to credit DMT’s effectiveness with geriatric populations.", "title": "" }, { "docid": "17de9469bca5e0b407c0dd90379860f9", "text": "This paper describes our rewrite of Phoenix, a MapReduce framework for shared-memory CMPs and SMPs. Despite successfully demonstrating the applicability of a MapReduce-style pipeline to shared-memory machines, Phoenix has a number of limitations; its uniform intermediate storage of key-value pairs, inefficient combiner implementation, and poor task overhead amortization fail to efficiently support a wide range of MapReduce applications, encouraging users to manually circumvent the framework. We describe an alternative implementation, Phoenix++, that provides a modular, flexible pipeline that can be easily adapted by the user to the characteristics of a particular workload. 
Compared to Phoenix, this new approach achieves a 4.7-fold performance improvement and increased scalability, while allowing users to write simple, strict MapReduce code.", "title": "" }, { "docid": "40735be327c91882fdfc2cb57ad12f37", "text": "BACKGROUND\nPolymorphism in the gene for angiotensin-converting enzyme (ACE), especially the DD genotype, is associated with risk for cardiovascular disease. Glomerulosclerosis has similarities to atherosclerosis, and we looked at ACE gene polymorphism in patients with kidney disease who were in a trial of long-term therapy with an ACE inhibitor or a beta-blocker.\n\n\nMETHODS\n81 patients with non-diabetic renal disease had been entered into a randomised comparison of oral atenolol or enalapril to prevent progressive decline in renal function. The dose was titrated to a goal diastolic blood pressure of 10 mm Hg below baseline and/or below 95 mm Hg. The mean (SE) age was 50 (1) years, and the group included 49 men. Their renal function had been monitored over 3-4 years. We have looked at their ACE genotype, which we assessed with PCR.\n\n\nFINDINGS\n27 patients had the II genotype, 37 were ID, and 17 were DD. 11 patients were lost to follow-up over 1-3 years. The decline of glomerular filtration rate over the years was significantly steeper in the DD group than in the ID and the II groups (p = 0.02; means -3.79, -1.37, and -1.12 mL/min per year, respectively). The DD patients treated with enalapril fared as equally a bad course as the DD patients treated with atenolol. Neither drug lowered the degree of proteinuria in the DD group.\n\n\nINTERPRETATION\nOur data show that patients with the DD genotype are resistant to commonly advocated renoprotective therapy.", "title": "" }, { "docid": "87518b738a57fe28197f65af20199b0a", "text": "Crowdsourced clustering approaches present a promising way to harness deep semantic knowledge for clustering complex information. However, existing approaches have difficulties supporting the global context needed for workers to generate meaningful categories, and are costly because all items require human judgments. We introduce Alloy, a hybrid approach that combines the richness of human judgments with the power of machine algorithms. Alloy supports greater global context through a new \"sample and search\" crowd pattern which changes the crowd's task from classifying a fixed subset of items to actively sampling and querying the entire dataset. It also improves efficiency through a two phase process in which crowds provide examples to help a machine cluster the head of the distribution, then classify low-confidence examples in the tail. To accomplish this, Alloy introduces a modular \"cast and gather\" approach which leverages a machine learning backbone to stitch together different types of judgment tasks.", "title": "" }, { "docid": "6fe5f8c299cbcff1b2b5f3f944e6ef75", "text": "Microservices are a new trend rising fast from the enterprise world. Even though the design principles around microservices have been identified, it is difficult to have a clear view of existing research solutions for architecting microservices. In this paper we apply the systematic mapping study methodology to identify, classify, and evaluate the current state of the art on architecting microservices from the following three perspectives: publication trends, focus of research, and potential for industrial adoption. 
More specifically, we systematically define a classification framework for categorizing the research on architecting microservices and we rigorously apply it to the 71 selected studies. We synthesize the obtained data and produce a clear overview of the state of the art. This gives a solid basis to plan for future research and applications of architecting microservices.", "title": "" }, { "docid": "ad059332e36849857c9bf1a52d5b0255", "text": "Interaction Design Beyond Human Computer Interaction instructions guide, service manual guide and maintenance manual guide for the products. Before employing this manual, service or maintenance guide you should know detail regarding your products cause this manual for expert only. We hope ford alternator wiring diagram internal regulator and yet another manual of these lists a good choice for your to repair, fix and solve your product or service or device problems don't try an oversight.", "title": "" }, { "docid": "d2a1ecb8ad28ed5ba75460827341f741", "text": "Most word representation methods assume that each word owns a single semantic vector. This is usually problematic because lexical ambiguity is ubiquitous, which is also the problem to be resolved by word sense disambiguation. In this paper, we present a unified model for joint word sense representation and disambiguation, which will assign distinct representations for each word sense.1 The basic idea is that both word sense representation (WSR) and word sense disambiguation (WSD) will benefit from each other: (1) highquality WSR will capture rich information about words and senses, which should be helpful for WSD, and (2) high-quality WSD will provide reliable disambiguated corpora for learning better sense representations. Experimental results show that, our model improves the performance of contextual word similarity compared to existing WSR methods, outperforms stateof-the-art supervised methods on domainspecific WSD, and achieves competitive performance on coarse-grained all-words WSD.", "title": "" }, { "docid": "2f0d6b9bee323a75eea3d15a3cabaeb6", "text": "OBJECTIVE\nThis article reviews the mechanisms and pathophysiology of traumatic brain injury (TBI).\n\n\nMETHODS\nResearch on the pathophysiology of diffuse and focal TBI is reviewed with an emphasis on damage that occurs at the cellular level. The mechanisms of injury are discussed in detail including the factors and time course associated with mild to severe diffuse injury as well as the pathophysiology of focal injuries. Examples of electrophysiologic procedures consistent with recent theory and research evidence are presented.\n\n\nRESULTS\nAcceleration/deceleration (A/D) forces rarely cause shearing of nervous tissue, but instead, initiate a pathophysiologic process with a well defined temporal progression. The injury foci are considered to be diffuse trauma to white matter with damage occurring at the superficial layers of the brain, and extending inward as A/D forces increase. Focal injuries result in primary injuries to neurons and the surrounding cerebrovasculature, with secondary damage occurring due to ischemia and a cytotoxic cascade. A subset of electrophysiologic procedures consistent with current TBI research is briefly reviewed.\n\n\nCONCLUSIONS\nThe pathophysiology of TBI occurs over time, in a pattern consistent with the physics of injury. 
The development of electrophysiologic procedures designed to detect specific patterns of change related to TBI may be of most use to the neurophysiologist.\n\n\nSIGNIFICANCE\nThis article provides an up-to-date review of the mechanisms and pathophysiology of TBI and attempts to address misconceptions in the existing literature.", "title": "" } ]
scidocsrr
4d5ebcaababcd7635948dd325056bb5f
Passphone: Outsourcing Phone-based Web Authentication while Protecting User Privacy
[ { "docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2", "text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.", "title": "" }, { "docid": "c733ee2715e69a674f3e8db46ca8c5b3", "text": "Authentication is of paramount importance for all modern networked applications. The username/password paradigm is ubiquitous. This paradigm suffices for many applications that require BLOCKIN BLOCKIN a BLOCKIN BLOCKIN relatively BLOCKIN BLOCKIN low BLOCKIN BLOCKIN level BLOCKIN BLOCKIN of BLOCKIN BLOCKIN assurance BLOCKIN BLOCKIN about BLOCKIN BLOCKIN the BLOCKIN BLOCKIN identity BLOCKIN BLOCKIN of BLOCKIN BLOCKIN the BLOCKIN BLOCKIN end BLOCKIN BLOCKIN user, BLOCKIN BLOCKIN but BLOCKIN BLOCKIN it BLOCKIN BLOCKIN quickly BLOCKIN BLOCKIN breaks down when a stronger assertion of the user's identity is required. Traditionally, this is where two-­‐ or multi-­‐factor authentication comes in, providing a higher level of assurance. There is a multitude of BLOCKIN BLOCKIN two-­‐factor BLOCKIN BLOCKIN authentication BLOCKIN BLOCKIN solutions BLOCKIN BLOCKIN available, BLOCKIN BLOCKIN but BLOCKIN BLOCKIN we BLOCKIN BLOCKIN feel BLOCKIN BLOCKIN that BLOCKIN BLOCKIN many BLOCKIN BLOCKIN solutions BLOCKIN BLOCKIN do BLOCKIN BLOCKIN not BLOCKIN BLOCKIN meet BLOCKIN BLOCKIN the needs of our community. They are invariably expensive, difficult to roll out in heterogeneous user groups (like student populations), often closed source and closed technology and have usability problems that make them hard to use. In this paper we will give an overview of the two-­‐factor au-­‐ thentication landscape and address the issues of closed versus open solutions. We will introduce a novel open standards-­‐based authentication technology that we have developed and released in open source. We will then provide a classification of two-­‐factor authentication technologies, and we will finish with an overview of future work.", "title": "" } ]
[ { "docid": "65385d7aee49806476dc913f6768fc43", "text": "Software developers spend a significant portion of their resources handling user-submitted bug reports. For software that is widely deployed, the number of bug reports typically outstrips the resources available to triage them. As a result, some reports may be dealt with too slowly or not at all. \n We present a descriptive model of bug report quality based on a statistical analysis of surface features of over 27,000 publicly available bug reports for the Mozilla Firefox project. The model predicts whether a bug report is triaged within a given amount of time. Our analysis of this model has implications for bug reporting systems and suggests features that should be emphasized when composing bug reports. \n We evaluate our model empirically based on its hypothetical performance as an automatic filter of incoming bug reports. Our results show that our model performs significantly better than chance in terms of precision and recall. In addition, we show that our modelcan reduce the overall cost of software maintenance in a setting where the average cost of addressing a bug report is more than 2% of the cost of ignoring an important bug report.", "title": "" }, { "docid": "ca8da405a67d3b8a30337bc23dfce0cc", "text": "Object detection is one of the most important tasks of computer vision. It is usually performed by evaluating a subset of the possible locations of an image, that are more likely to contain the object of interest. Exhaustive approaches have now been superseded by object proposal methods. The interplay of detectors and proposal algorithms has not been fully analyzed and exploited up to now, although this is a very relevant problem for object detection in video sequences. We propose to connect, in a closed-loop, detectors and object proposal generator functions exploiting the ordered and continuous nature of video sequences. Different from tracking we only require a previous frame to improve both proposal and detection: no prediction based on local motion is performed, thus avoiding tracking errors. We obtain three to four points of improvement in mAP and a detection time that is lower than Faster Regions with CNN features (R-CNN), which is the fastest Convolutional Neural Network (CNN) based generic object detector known at the moment.", "title": "" }, { "docid": "3508a963a4f99d02d9c41dab6801d8fd", "text": "The role of classroom discussions in comprehension and learning has been the focus of investigations since the early 1960s. Despite this long history, no syntheses have quantitatively reviewed the vast body of literature on classroom discussions for their effects on students’ comprehension and learning. This comprehensive meta-analysis of empirical studies was conducted to examine evidence of the effects of classroom discussion on measures of teacher and student talk and on individual student comprehension and critical-thinking and reasoning outcomes. Results revealed that several discussion approaches produced strong increases in the amount of student talk and concomitant reductions in teacher talk, as well as substantial improvements in text comprehension. Few approaches to discussion were effective at increasing students’ literal or inferential comprehension and critical thinking and reasoning. Effects were moderated by study design, the nature of the outcome measure, and student academic ability. 
While the range of ages of participants in the reviewed studies was large, a majority of studies were conducted with students in 4th through 6th grades. Implications for research and practice are discussed.", "title": "" }, { "docid": "4bad90ae99a3b3fcbf26662a0d6c9fc7", "text": "Driver fatigue is the major cause of accidents in the world. Detecting the drowsiness of the driver is the surest ways of measuring the driver fatigue. The purpose of this paper is to develop a drowsiness detection system. This system works by analyzing the eye movement of the driver and alerting the driver by activating the buzzer when he/she is drowsy. The system so implemented is a nonintrusive real-time monitoring system for eye detection. During monitoring, the system is able to decide whether the eyes were opened or closed. When the eyes were detected closed for too long, a signal was issued to warn the driver. In addition, the system also have an option for making vibration when drowsiness was detected. The aim is on improving the safety of the driver without being obtrusive. Visual cues were obtained through eye blink rate by using a camera, which typically characterize the level of alertness of a person. These were extracted in real-time and systematically joined to check the fatigue level of the driver. The system can monitor the driver's eyes to detect short periods of sleep lasting 3 to 4 seconds. The system implemented in this approach runs at 8-15 frames per second. The application was implemented using Open CV in Raspberry Pi environment with a single camera view. This system was used to detect the drowsiness of the driver and thereby reducing the road accidents.", "title": "" }, { "docid": "1f6d0e820b169d13e961b672b75bde71", "text": "Prenatal stress can cause long-term effects on cognitive functions in offspring. Hippocampal synaptic plasticity, believed to be the mechanism underlying certain types of learning and memory, and known to be sensitive to behavioral stress, can be changed by prenatal stress. Whether enriched environment treatment (EE) in early postnatal periods can cause a recovery from these deficits is unknown. Experimental animals were Wistar rats. Prenatal stress was evoked by 10 foot shocks (0.8 mA for 1s, 2-3 min apart) in 30 min per day at gestational day 13-19. After weaning at postnatal day 22, experimental offspring were given the enriched environment treatment through all experiments until tested (older than 52 days age). Electrophysiological and Morris water maze testing was performed at 8 weeks of age. The results showed that prenatal stress impaired long-term potentiation (LTP) but facilitated long-term depression (LTD) in the hippocampal CA1 region in the slices. Furthermore, prenatal stress exacerbated the effects of acute stress on hippocampal LTP and LTD, and also impaired spatial learning and memory in the Morris water maze. However, all these deficits induced by prenatal stress were recovered by enriched environment treatment. This work observes a phenomenon that may contribute to the understanding of clinically important interactions among cognitive deficit, prenatal stress and enriched environment treatment. 
Enriched environment treatment on early postnatal periods may be one potentially important target for therapeutic interventions in preventing the prenatal stress-induced cognitive disorders.", "title": "" }, { "docid": "5ffd50dec7e617a3a0ee4517064f9b9f", "text": "PURPOSE\nThe authors assessed comorbidity of auditory processing disorder (APD), language impairment (LI), and reading disorder (RD) in school-age children.\n\n\nMETHOD\nChildren (N = 68) with suspected APD and nonverbal IQ standard scores of 80 or more were assessed using auditory, language, reading, attention, and memory measures. Auditory processing tests included the Frequency Pattern Test (FPT; F. E. Musiek, 1994; D. Noffsinger, R. H. Wilson, & F. E. Musiek, 1994); the Dichotic Digit Test Version 2 (DDT; F. E. Musiek, 1983); the Random Gap Detection Test (R. W. Keith, 2000); the 500-Hz tone Masking Level Difference (V. Aithal, A. Yonovitz, & S. Aithal, 2006); and a monaural low-redundancy speech test (compressed and reverberant words; A. Boothroyd & S. Nittrouer, 1988). The Clinical Evaluation of Language Fundamentals, Fourth Edition (E. Semel, E. Wiig, & W. Secord, 2003) was used to assess language abilities (including auditory memory). Reading accuracy and fluency and phonological awareness abilities were assessed using the Wheldall Assessment of Reading Passages (A. Madelaine & K. Wheldall, 2002) and the Queensland University Inventory of Literacy (B. Dodd, A. Holm, M. Orelemans, & M. McCormick, 1996). Attention was measured using the Integrated Visual and Auditory Continuous Performance Test (J. A. Sandford & A. Turner, 1995).\n\n\nRESULTS\nOf the children, 72% had APD on the basis of these test results. Most of these children (25%) had difficulty with the FPT bilaterally. A further 22% had difficulty with the FPT bilaterally and had right ear deficits for the DDT. About half of the children (47%) had problems in all 3 areas (APD, LI, and RD); these children had the poorest FPT scores. More had APD-RD, or APD-LI, than APD, RD, or LI alone. There were modest correlations between FPT scores and attention and memory, and between DDT scores and memory.\n\n\nCONCLUSIONS\nLI and RD commonly co-occur with APD. Attention and memory are linked to performance on some auditory processing tasks but only explain a small amount of the variance in scores. Comprehensive assessment across a range of areas is required to characterize the difficulties experienced by children with APD.", "title": "" }, { "docid": "a395993ce7fb6fa144b79364724cd7dc", "text": "High cesarean birth rates are an issue of international public health concern.1 Worries over such increases have led the World Health Organization to advise that Cesarean Section (CS) rates should not be more than 15%,2 with some evidence that CS rates above 15% are not associated with additional reduction in maternal and neonatal mortality and morbidity.3 Analyzing CS rates in different countries, including primary vs. repeat CS and potential reasons of these, provide important insights into the solution for reducing the overall CS rate. Robson,4 proposed a new classification system, the Robson Ten-Group Classification System to allow critical analysis according to characteristics of pregnancy (Table 1). 
The characteristics used are: (i) single or multiple pregnancy (ii) nulliparous, multiparous, or multiparous with a previous CS (iii) cephalic, breech presentation or other malpresentation (iv) spontaneous or induced labor (v) term or preterm births.", "title": "" }, { "docid": "dee5489accb832615f63623bc445212f", "text": "In this paper a simulation-based scheduling system is discussed which was developed for a semiconductor Backend facility. Apart from the usual dispatching rules it uses heuristic search strategies for the optimization of the operating sequences. In practice hereby multiple objectives have to be considered, e. g. concurrent minimization of mean cycle time, maximization of throughput and due date compliance. Because the simulation model is very complex and simulation time itself is not negligible, we emphasize to increase the convergence of heuristic optimization methods, consequentially reducing the number of necessary iterations. Several realized strategies are presented.", "title": "" }, { "docid": "bd60d22618da150fab8f769dbd0a7bca", "text": "All induction heating applied systems are developed using electromagnetic induction. Electromagnetic induction refers to the phenomenon by which electric current is generated in a closed circuit by the fluctuation of current in another circuit placed next to it. The basic principle of induction heating is that AC current flowing through a primarily circuit induces a current in the load (the secondary circuit) located near it and heating the load. The intent of this presentation is to present the details of half-bridge power inverter via a comprehensive analysis with operation equations of the circuit and their solving using specific sotware. An alternative to the use of circuit-oriented simulators for study of these circuits operating is to describe the circuit and the controller by means of differential and algebraic equations. We must develop the equations for all possible states in which the circuit may operate. These algebraic/differential equations can be solved by using of software packages specifically designed for this purpose that provide a choice of integration routines, graphical output, and so on.", "title": "" }, { "docid": "1748b3d85525b0058573339493813c60", "text": "BACKGROUND\nTopical minoxidil solution 2% stimulates new hair growth and helps stop the loss of hair in individuals with androgenetic alopecia (AGA). Results can be variable, and historical experience suggests that higher concentrations of topical minoxidil may enhance efficacy.\n\n\nOBJECTIVE\nThe purpose of this 48-week, double-blind, placebo-controlled, randomized, multicenter trial was to compare 5% topical minoxidil with 2% topical minoxidil and placebo in the treatment of men with AGA.\n\n\nMETHODS\nA total of 393 men (18-49 years old) with AGA applied 5% topical minoxidil solution (n = 157), 2% topical minoxidil solution (n = 158), or placebo (vehicle for 5% solution; n = 78) twice daily. Efficacy was evaluated by scalp target area hair counts and patient and investigator assessments of change in scalp coverage and benefit of treatment.\n\n\nRESULTS\nAfter 48 weeks of therapy, 5% topical minoxidil was significantly superior to 2% topical minoxidil and placebo in terms of change from baseline in nonvellus hair count, patient rating of scalp coverage and treatment benefit, and investigator rating of scalp coverage. Hair count data indicate that response to treatment occurred earlier with 5% compared with 2% topical minoxidil. 
Additionally, data from a patient questionnaire on quality of life, global benefit, hair growth, and hair styling demonstrated that 5% topical minoxidil helped improve patients' psychosocial perceptions of hair loss. An increased occurrence of pruritus and local irritation was observed with 5% topical minoxidil compared with 2% topical minoxidil.\n\n\nCONCLUSION\nIn men with AGA, 5% topical minoxidil was clearly superior to 2% topical minoxidil and placebo in increasing hair regrowth, and the magnitude of its effect was marked (45% more hair regrowth than 2% topical minoxidil at week 48). Men who used 5% topical minoxidil also had an earlier response to treatment than those who used 2% topical minoxidil. Psychosocial perceptions of hair loss in men with AGA were also improved. Topical minoxidil (5% and 2%) was well tolerated by the men in this trial without evidence of systemic effects.", "title": "" }, { "docid": "bd7581bbb11e45685ccf44af8328c1dd", "text": "The Full Bridge converter topology modulated in phase shift is one of the most popular converters used to obtain high efficiency conversion, especially in high power and high voltage applications. This converter topology combines the simplicity of fixed frequency modulations with the soft switching characteristic of resonant converters but, if a diode rectifier is used as output stage, it suffers of severe overshoot voltage spikes and ringing across the rectifier. In this paper, a new regenerative active snubber is widely studied and developed to reduce this drawback. The proposed snubber is based on a combination of an active clamp with a buck converter used to discharge the snubber capacitor. The snubber gate signal is obtained by using those of the phase shift modulation.", "title": "" }, { "docid": "d7aeb8de7bf484cbaf8e23fcf675d002", "text": "One method for detecting fraud is to check for suspicious changes in user behavior. This paper proposes a novel method, built upon ontology and ontology instance similarity. Ontology is now widely used to enable knowledge sharing and reuse, so some personality ontologies can be easily used to present user behavior. By measure the similarity of ontology instances, we can determine whether an account is defrauded. This method lows the data model cost and make the system very adaptive to different applications.", "title": "" }, { "docid": "e03640352c1b0074a0bdd21cafbda61e", "text": "The problem of finding an automatic thresholding technique is well known in applications involving image differencing like visual-based surveillance systems, autonomous vehicle driving, etc. Among the algorithms proposed in the past years, the thresholding technique based on the stable Euler number method is considered one of the most promising in terms of visual results. Unfortunately its high computational complexity made it an impossible choice for real-time applications. The implementation here proposed, called fast Euler numbers, overcomes the problem since it calculates all the Euler numbers in just one single raster scan of the image. That is, it runs in O(hw), where h and w are the image's height and width, respectively. A technique for determining the optimal threshold, called zero crossing, is also proposed. 2002 Elsevier Science B.V. 
All rights reserved.", "title": "" }, { "docid": "f86a439034e87d2df63994b428fbd3aa", "text": "OBJECTIVES\nTo describe an alar cartilage-modifying technique aimed at decreasing nasal tip projection in cases with overdeveloped alar cartilages and to compare it with other deprojection techniques used to correct such deformity.\n\n\nDESIGN\nSelected case series.\n\n\nSETTINGS\nUniversity and private practice settings in Alexandria, Egypt.\n\n\nPATIENTS\nTwenty patients presenting for rhinoplasty who had overprojected nasal tips primarily due to overdeveloped alar cartilages. All cases were primary cases except for one patient, who had undergone 2 previous rhinoplasties.\n\n\nINTERVENTION\nAn external rhinoplasty approach was used to set back the alar cartilages by shortening their medial and lateral crura. The choice of performing a high or low setback depended on the preexisting lobule-to-columella ratio. Following the setback, the alar cartilages were reconstructed in a fashion that increased the strength and stability of the tip complex.\n\n\nMAIN OUTCOME MEASURES\nSubjective evaluation included clinical examination, analysis of preoperative and postoperative photographs, and patient satisfaction. Objective evaluation of nasal tip projection, using the Goode ratio and the nasofacial angle, was performed preoperatively and repeated at least 6 months postoperatively.\n\n\nRESULTS\nA low setback was performed in 16 cases (80%) and a high setback in 4 (20%). The mean follow-up period was 18 months (range, 6-36 months). The technique effectively deprojected the nasal tip as evidenced by the considerable postoperative decrease in values of the Goode ratio and the nasofacial angle. No complications were encountered and no revision surgical procedures were required.\n\n\nCONCLUSIONS\nThe alar setback technique has many advantages; it results in precise predictable amounts of deprojection, controls the degree of tip rotation, preserves the natural contour of the nasal tip, respects the tip support mechanisms, increases the strength and stability of nasal tip complex, preserves or restores the normal lobule-to-columella proportion, and does not lead to alar flaring. However, the technique requires an external rhinoplasty approach and fine technical precision.", "title": "" }, { "docid": "5378e05d2d231969877131a011b3606a", "text": "Environmental, health, and safety (EHS) concerns are receiving considerable attention in nanoscience and nanotechnology (nano) research and development (R&D). Policymakers and others have urged that research on nano's EHS implications be developed alongside scientific research in the nano domain rather than subsequent to applications. This concurrent perspective suggests the importance of early understanding and measurement of the diffusion of nano EHS research. The paper examines the diffusion of nano EHS publications, defined through a set of search terms, into the broader nano domain using a global nanotechnology R&D database developed at Georgia Tech. The results indicate that nano EHS research is growing rapidly although it is orders of magnitude smaller than the broader nano S&T domain. Nano EHS work is moderately multidisciplinary, but gaps in biomedical nano EHS's connections with environmental nano EHS are apparent. 
The paper discusses the implications of these results for the continued monitoring and development of the cross-disciplinary utilization of nano EHS research.", "title": "" }, { "docid": "35260e253551bcfd21ce6d08c707f092", "text": "Current debugging and optimization methods scale poorly to deal with the complexity of modern Internet services, in which a single request triggers parallel execution of numerous heterogeneous software components over a distributed set of computers. The Achilles’ heel of current methods is the need for a complete and accurate model of the system under observation: producing such a model is challenging because it requires either assimilating the collective knowledge of hundreds of programmers responsible for the individual components or restricting the ways in which components interact. Fortunately, the scale of modern Internet services offers a compensating benefit: the sheer volume of requests serviced means that, even at low sampling rates, one can gather a tremendous amount of empirical performance observations and apply “big data” techniques to analyze those observations. In this paper, we show how one can automatically construct a model of request execution from pre-existing component logs by generating a large number of potential hypotheses about program behavior and rejecting hypotheses contradicted by the empirical observations. We also show how one can validate potential performance improvements without costly implementation effort by leveraging the variation in component behavior that arises naturally over large numbers of requests to measure the impact of optimizing individual components or changing scheduling behavior. We validate our methodology by analyzing performance traces of over 1.3 million requests to Facebook servers. We present a detailed study of the factors that affect the end-to-end latency of such requests. We also use our methodology to suggest and validate a scheduling optimization for improving Facebook request latency.", "title": "" }, { "docid": "0e68fa08edfc2dcb52585b13d0117bf1", "text": "Knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs contain only a small subset of what is true in the world. Link prediction approaches aim at predicting new links for a knowledge graph given the existing links among the entities. Tensor factorization approaches have proved promising for such link prediction problems. Proposed in 1927, Canonical Polyadic (CP) decomposition is among the first tensor factorization approaches. CP generally performs poorly for link prediction as it learns two independent embedding vectors for each entity, whereas they are really tied. We present a simple enhancement of CP (which we call SimplE) to allow the two embeddings of each entity to be learned dependently. The complexity of SimplE grows linearly with the size of embeddings. The embeddings learned through SimplE are interpretable, and certain types of background knowledge can be incorporated into these embeddings through weight tying. We prove SimplE is fully expressive and derive a bound on the size of its embeddings for full expressivity. We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques. 
SimplE’s code is available on GitHub at https://github.com/Mehran-k/SimplE.", "title": "" }, { "docid": "e5f2101e7937c61a4d6b11d4525a7ed8", "text": "This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor-observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements.", "title": "" }, { "docid": "bcf69b1d42d28b8ba66b133ad6421cc4", "text": "This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website.1", "title": "" }, { "docid": "a2eee3cd0e8ee3e97af54f11b8a29fc9", "text": "Internet Service Providers (ISPs) are responsible for transmitting and delivering their customers’ data requests, ranging from requests for data from websites, to that from filesharing applications, to that from participants in Voice over Internet Protocol (VoIP) chat sessions. Using contemporary packet inspection and capture technologies, ISPs can investigate and record the content of unencrypted digital communications data packets. This paper explains the structure of these packets, and then proceeds to describe the packet inspection technologies that monitor their movement and extract information from the packets as they flow across ISP networks. After discussing the potency of contemporary deep packet inspection devices, in relation to their earlier packet inspection predecessors, and their potential uses in improving network operators’ network management systems, I argue that they should be identified as surveillance technologies that can potentially be incredibly invasive. Drawing on Canadian examples, I argue that Canadian ISPs are using DPI technologies to implicitly ‘teach’ their customers norms about what are ‘inappropriate’ data transfer programs, and the appropriate levels of ISP manipulation of customer data traffic. Version 1.2 :: January 10, 2008. * Doctoral student in the University of Victoria’s Political Science department. Thanks to Colin Bennett, Andrew Clement, Fenwick Mckelvey and Joyce Parsons for comments.", "title": "" } ]
scidocsrr
b882b94513763485b754e9019e2d5bde
A Highly Efficient and Linear Power Amplifier for 28-GHz 5G Phased Array Radios in 28-nm CMOS
[ { "docid": "a0dac5a10c9e81e5d1f750513b56849d", "text": "This paper presents an in-depth study of a 45-nm CMOS silicon-on-insulator (SOI) technology. Several transistor test cells are characterized and the effect of finger width, gate contact, and gate poly pitch on transistor performance is analyzed. The measured peak ft is 264 GHz for a 30 × 1007 nm single-gate contact relaxed-pitch transistor and the best fmax of 283 GHz is achieved by a 58 × 513 nm single-gate contact regular pitch transistor. The measured transistor performance agrees well with the simulations including R/C extraction up to the top metal layer. Passive components are also characterized and their performance is predicted accurately with design kit models and electromagnetic simulations. Low-noise amplifiers from Q- to W-band are developed in this technology and they achieve state-of-the-art noise-figure values.", "title": "" }, { "docid": "ed676ff14af6baf9bde3bdb314628222", "text": "The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage.", "title": "" } ]
[ { "docid": "268e0e06a23f495cc36958dafaaa045a", "text": "Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one’s experiences—a hallmark of human intelligence from infancy—remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between “hand-engineering” and “end-to-end” learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias—the graph network—which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have also released an open-source software library for building graph networks, with demonstrations of how to use them in practice.", "title": "" }, { "docid": "7bd0d6ef1d523c49c1a1595e31413e31", "text": "Germination vigor is driven by the ability of the plant embryo, embedded within the seed, to resume its metabolic activity in a coordinated and sequential manner. Studies using \"-omics\" approaches support the finding that a main contributor of seed germination success is the quality of the messenger RNAs stored during embryo maturation on the mother plant. In addition, proteostasis and DNA integrity play a major role in the germination phenotype. Because of its pivotal role in cell metabolism and its close relationships with hormone signaling pathways regulating seed germination, the sulfur amino acid metabolism pathway represents a key biochemical determinant of the commitment of the seed to initiate its development toward germination. This review highlights that germination vigor depends on multiple biochemical and molecular variables. Their characterization is expected to deliver new markers of seed quality that can be used in breeding programs and/or in biotechnological approaches to improve crop yields.", "title": "" }, { "docid": "a65b11ebb320e4883229f4a50d51ae2f", "text": "Vast quantities of text are becoming available in electronic form, ranging from published documents (e.g., electronic dictionaries, encyclopedias, libraries and archives for information retrieval services), to private databases (e.g., marketing information, legal records, medical histories), to personal email and faxes. Online information services are reaching mainstream computer users. 
There were over 15 million Internet users in 1993, and projections are for 30 million in 1997. With media attention reaching all-time highs, hardly a day goes by without a new article on the National Information Infrastructure, digital libraries, networked services, digital convergence or intelligent agents. This attention is moving natural language processing along the critical path for all kinds of novel applications.", "title": "" }, { "docid": "0f37f7306f879ca0b5d35516a64818fb", "text": "Much of empirical corporate finance focuses on sources of the demand for various forms of capital, not the supply. Recently, this has changed. Supply effects of equity and credit markets can arise from a combination of three ingredients: investor tastes, limited intermediation, and corporate opportunism. Investor tastes when combined with imperfectly competitive intermediaries lead prices and interest rates to deviate from fundamental values. Opportunistic firms respond by issuing securities with high prices and investing the proceeds. A link between capital market prices and corporate finance can in principle come from either supply or demand. This framework helps to organize empirical approaches that more precisely identify and quantify supply effects through variation in one of these three ingredients. Taken as a whole, the evidence shows that shifting equity and credit market conditions play an important role in dictating corporate finance and investment.", "title": "" }, { "docid": "f9836f7f1d3ffbdc1fe2912616c2375a", "text": "Wind is often regarded as the foe of tall buildings since it tends to be the governing lateral load. Careful aerodynamic design of tall buildings through wind tunnel testing can greatly reduce wind loads and their affect on building motions. Various shaping strategies are discussed, aimed particularly at suppression of vortex shedding since it is frequently the cause of crosswind excitation. The use of supplementary damping systems is another approach that takes the energy out of building motions and reduces loads. Different applications of damping systems are described on several buildings, and an example of material savings and reduced carbon emissions is given. Wind also has some potential beneficial effects particular to tall buildings. One is that, since wind speeds are higher at the heights of tall buildings, the potential for extracting wind energy using wind turbines is significantly improved compared with ground level. The paper explores how much energy might be generated in this way relative to the building's energy usage. Other benefits are to be found in judicious use of natural ventilation, sometimes involving double layer wall systems, and, in hot climates, the combination of tailored wind and shade conditions to improve outdoor comfort near tall buildings and on balconies and terraces.", "title": "" }, { "docid": "5a35d547a619e2e5ce0eb10b494ae030", "text": "Acute and chronic sports-related traumatic brain injuries (TBIs) are a substantial public health concern. Various types of acute TBI can occur in sport, but detection and management of cerebral concussion is of greatest importance as mismanagement of this syndrome can lead to persistent or chronic postconcussion syndrome (CPCS) or diffuse cerebral swelling. 
Chronic TBI encompasses a spectrum of disorders that are associated with long-term consequences of brain injury, including chronic traumatic encephalopathy (CTE), dementia pugilistica, post-traumatic parkinsonism, post-traumatic dementia and CPCS. CTE is the prototype of chronic TBI, but can only be definitively diagnosed at autopsy as no reliable biomarkers of this disorder are available. Whether CTE shares neuropathological features with CPCS is unknown. Evidence suggests that participation in contact–collision sports may increase the risk of neurodegenerative disorders such as Alzheimer disease, but the data are conflicting. In this Review, the spectrum of acute and chronic sport-related TBI is discussed, highlighting how examination of athletes involved in high-impact sports has advanced our understanding of pathology of brain injury and enabled improvements in detection and diagnosis of sport-related TBI.", "title": "" }, { "docid": "5c6c7ab45d99dcc6beb6b03c38d4e065", "text": "Text message stream which is produced by Instant Messager and Internet Relay Chat poses interesting and challenging problems for information technologies. It is beneficial to extract the conversations in this kind of chatting message stream for information management and knowledge finding. However, the data in text message stream are usually very short and incomplete, and it requires efficiency to monitor thousands of continuous chat sessions. Many existing text mining methods encounter challenges. This paper focuses on the conversation extraction in dynamic text message stream. We design the dynamic representation for messages to combine the text content information and linguistic feature in message stream. A memory structure of reversed maximal similar relationship is developed for renewable assignments when grouping messages into conversations. We finally propose a double time window algorithm based on above methods to extract conversations in dynamic text message stream. Experiments on a real dataset shows that our method outperforms two baseline methods introduced in a recent related paper about 47% and 15% in terms of F measure", "title": "" }, { "docid": "c2195ae053d1bbf712c96a442a911e31", "text": "This paper introduces a new method to solve the cross-domain recognition problem. Different from the traditional domain adaption methods which rely on a global domain shift for all classes between the source and target domains, the proposed method is more flexible to capture individual class variations across domains. By adopting a natural and widely used assumption that the data samples from the same class should lay on an intrinsic low-dimensional subspace, even if they come from different domains, the proposed method circumvents the limitation of the global domain shift, and solves the cross-domain recognition by finding the joint subspaces of the source and target domains. Specifically, given labeled samples in the source domain, we construct a subspace for each of the classes. Then we construct subspaces in the target domain, called anchor subspaces, by collecting unlabeled samples that are close to each other and are highly likely to belong to the same class. The corresponding class label is then assigned by minimizing a cost function which reflects the overlap and topological structure consistency between subspaces across the source and target domains, and within the anchor subspaces, respectively. 
We further combine the anchor subspaces to the corresponding source subspaces to construct the joint subspaces. Subsequently, one-versus-rest support vector machine classifiers are trained using the data samples belonging to the same joint subspaces and applied to unlabeled data in the target domain. We evaluate the proposed method on two widely used datasets: 1) object recognition dataset for computer vision tasks and 2) sentiment classification dataset for natural language processing tasks. Comparison results demonstrate that the proposed method outperforms the comparison methods on both datasets.", "title": "" }, { "docid": "53c5366ddb389e4b4822e5395e416380", "text": "Information exchange in the just about any cluster of computer has to be more secure in the era of cloud computing and big data. Steganography helps to prevent illegal attention through covering the secret message in a number of digitally electronically representative media, without hurting the accessibility of secret message. Image steganography methods are recently been helpful to send any secret message in the protected image carrier to prevent threats and attacks whereas it does not give any kind of opportunity to hackers to find out the secret concept. Inside a steganographic system secrets information is embedded inside of a cover file, to ensure that no one will suspect that anything perhaps there is inside carrier. The cover file could be image, audio or video. To really make it safer, the secrets information might be encrypted embedded then, it will be decrypted in the receiver. In this paper, we are reviewing some digital image steganographic techniques depending on LSB (least significant bit) & LSB array concept.", "title": "" }, { "docid": "983ec9cdd75d0860c96f89f3c9b2f752", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8de09be7888299dc5dd30bbeb5578c35", "text": "Scene text detection is challenging as the input may have different orientations, sizes, font styles, lighting conditions, perspective distortions and languages. This paper addresses the problem by designing a Rotational Region CNN (R2CNN). R2CNN includes a Text Region Proposal Network (Text-RPN) to estimate approximate text regions and a multitask refinement network to get the precise inclined box. Our work has the following features. First, we use a novel multi-task regression method to support arbitrarily-oriented scene text detection. Second, we introduce multiple ROIPoolings to address the scene text detection problem for the first time. Third, we use an inclined Non-Maximum Suppression (NMS) to post-process the detection candidates. Experiments show that our method outperforms the state-of-the-art on standard benchmarks: ICDAR 2013, ICDAR 2015, COCO-Text and MSRA-TD500.", "title": "" }, { "docid": "69be80d84b30099286a36c3e653281d3", "text": "Since the middle ages, essential oils have been widely used for bactericidal, virucidal, fungicidal, antiparasitical, insecticidal, medicinal and cosmetic applications, especially nowadays in pharmaceutical, sanitary, cosmetic, agricultural and food industries. 
Because of the mode of extraction, mostly by distillation from aromatic plants, they contain a variety of volatile molecules such as terpenes and terpenoids, phenol-derived aromatic components and aliphatic components. In vitro physicochemical assays characterise most of them as antioxidants. However, recent work shows that in eukaryotic cells, essential oils can act as prooxidants affecting inner cell membranes and organelles such as mitochondria. Depending on type and concentration, they exhibit cytotoxic effects on living cells but are usually non-genotoxic. In some cases, changes in intracellular redox potential and mitochondrial dysfunction induced by essential oils can be associated with their capacity to exert antigenotoxic effects. These findings suggest that, at least in part, the encountered beneficial effects of essential oils are due to prooxidant effects on the cellular level.", "title": "" }, { "docid": "9128e3786ba8d0ab36aa2445d84de91c", "text": "A technique for the correction of flat or inverted nipples is presented. The procedure is a combination of the square flap method, which better shapes the corrected nipple, and the dermal sling, which provides good support for the repaired nipple.", "title": "" }, { "docid": "bd1523c64d8ec69d87cbe68a4d73ea17", "text": "BACKGROUND AND OBJECTIVE\nThe effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library.\n\n\nMETHODS\nBased on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library.\n\n\nRESULTS\nWe have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest.\n\n\nCONCLUSIONS\nThe IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems.", "title": "" }, { "docid": "9a9dc194e0ca7d1bb825e8aed5c9b4fe", "text": "In this paper we show how to divide data <italic>D</italic> into <italic>n</italic> pieces in such a way that <italic>D</italic> is easily reconstructable from any <italic>k</italic> pieces, but even complete knowledge of <italic>k</italic> - 1 pieces reveals absolutely no information about <italic>D</italic>. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.", "title": "" }, { "docid": "3473e863d335725776281fe2082b756f", "text": "Visual tracking using multiple features has been proved as a robust approach because features could complement each other. 
Since different types of variations such as illumination, occlusion, and pose may occur in a video sequence, especially long sequence videos, how to properly select and fuse appropriate features has become one of the key problems in this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method dynamically removes unreliable features to be fused for tracking by using the advantages of sparse representation. In order to capture the non-linear similarity of features, we extend the proposed method into a general kernelized framework, which is able to perform feature fusion on various kernel spaces. As a result, robust tracking performance is obtained. Both the qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse representation-based and fusion based-trackers.", "title": "" }, { "docid": "b687f595e8fb018702e26bc97afcf09b", "text": "The Markov Random Walk model has been recently exploited for multi-document summarization by making use of the link relationships between sentences in the document set, under the assumption that all the sentences are indistinguishable from each other. However, a given document set usually covers a few topic themes with each theme represented by a cluster of sentences. The topic themes are usually not equally important and the sentences in an important theme cluster are deemed more salient than the sentences in a trivial theme cluster. This paper proposes the Cluster-based Conditional Markov Random Walk Model (ClusterCMRW) and the Cluster-based HITS Model (ClusterHITS) to fully leverage the cluster-level information. Experimental results on the DUC2001 and DUC2002 datasets demonstrate the good effectiveness of our proposed summarization models. The results also demonstrate that the ClusterCMRW model is more robust than the ClusterHITS model, with respect to different cluster numbers.", "title": "" }, { "docid": "73c8bc0cbe31fad45519eb4066d307ff", "text": "The semantic image segmentation task presents a trade-off between test time accuracy and training-time annotation cost. Detailed per-pixel annotations enable training accurate models but are very timeconsuming to obtain; image-level class labels are an order of magnitude cheaper but result in less accurate models. We take a natural step from image-level annotation towards stronger supervision: we ask annotators to point to an object if one exists. We incorporate this point supervision along with a novel objectness potential in the training loss function of a CNN model. Experimental results on the PASCAL VOC 2012 benchmark reveal that the combined effect of point-level supervision and objectness potential yields an improvement of 12.9% mIOU over image-level supervision. Further, we demonstrate that models trained with pointlevel supervision are more accurate than models trained with image-level, squiggle-level or full supervision given a fixed annotation budget.", "title": "" }, { "docid": "ab3f8779b9c3347cece106ea6195e8cd", "text": "We present a novel modular architecture for StarCraft II AI. The architecture splits responsibilities between multiple modules that each control one aspect of the game, such as build-order selection or tactics. A centralized scheduler reviews macros suggested by all modules and decides their order of execution. An updater keeps track of environment changes and instantiates macros into series of executable actions. 
Modules in this framework can be optimized independently or jointly via human design, planning, or reinforcement learning. We apply deep reinforcement learning techniques to training two out of six modules of a modular agent with self-play, achieving 94% or 87% win rates against the ”Harder” (level 5) built-in Blizzard bot in Zerg vs. Zerg matches, with or without fog-of-war.", "title": "" } ]
scidocsrr
b3c211ae3bfb6c6d137f3acb66953472
Novel object-size measurement using the digital camera
[ { "docid": "85a076e58f4d117a37dfe6b3d68f5933", "text": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.", "title": "" } ]
[ { "docid": "f9de4041343fb6c570e5cbce4cb1ff66", "text": "Do the languages that we speak affect how we experience the world? This question was taken up in a linguistic survey and two non-linguistic psychophysical experiments conducted in native speakers of English, Indonesian, Greek, and Spanish. All four of these languages use spatial metaphors to talk about time, but the particular metaphoric mappings between time and space vary across languages. A linguistic corpus study revealed that English and Indonesian tend to map duration onto linear distance (e.g., a long time), whereas Greek and Spanish preferentially map duration onto quantity (e.g., much time). Two psychophysical time estimation experiments were conducted to determine whether this cross-linguistic difference has implications for speakers’ temporal thinking. Performance on the psychophysical tasks reflected the relative frequencies of the ‘time as distance’ and ‘time as quantity’ metaphors in English, Indonesian, Greek, and Spanish. This was true despite the fact that the tasks used entirely nonlinguistic stimuli and responses. Results suggest that: (1.) The spatial metaphors in our native language may profoundly influence the way we mentally represent time. (2.) Language can shape even primitive, low-level mental processes such as estimating brief durations – an ability we share with babies and non-human animals.", "title": "" }, { "docid": "0de0093ab3720901d4704bfeb7be4093", "text": "Big Data analytics can revolutionize the healthcare industry. It can improve operational efficiencies, help predict and plan responses to disease epidemics, improve the quality of monitoring of clinical trials, and optimize healthcare spending at all levels from patients to hospital systems to governments. This paper provides an overview of Big Data, applicability of it in healthcare, some of the work in progress and a future outlook on how Big Data analytics can improve overall quality in healthcare systems.", "title": "" }, { "docid": "db4bb32f6fdc7a05da41e223afac3025", "text": "Modern imaging techniques for probing brain function, including functional magnetic resonance imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for analysis and visualization of such imaging data to separate the signal from the noise and characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging, and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: \"noise\" characterization and suppression, and \"signal\" characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for nonstationarity in the data. 
Of particular note are 1) the development of a decomposition technique (space-frequency singular value decomposition) that is shown to be a useful means of characterizing the image data, and 2) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources.", "title": "" }, { "docid": "6277d1a524d45908acfe4045df560f36", "text": "We present a novel method to track 3D models in color and depth data. To this end, we introduce approximations that accelerate the state-of-the-art in region-based tracking by an order of magnitude while retaining similar accuracy. Furthermore, we show how the method can be made more robust in the presence of depth data and consequently formulate a new joint contour and ICP tracking energy. We present better results than the state-of-the-art while being much faster than most other methods and achieving all of the above on a single CPU core.", "title": "" }, { "docid": "95a9fce47acac46550c0d559fbfbdffb", "text": "We apply the face recognition technology developed in-house at face.com to a well-accepted benchmark and show that without any tuning we are able to considerably surpass state-of-the-art results. Much of the improvement is concentrated in the high-valued performance point of zero false positive matches, where the obtained recall rate almost doubles the best reported result to date. We discuss the various components and innovations of our system that enable this significant performance gap. These components include extensive utilization of an accurate 3D reconstructed shape model dealing with challenges arising from pose and illumination. In addition, discriminative models based on billions of faces are used in order to overcome aging and facial expression as well as low light and overexposure. Finally, we identify a challenging set of identification queries that might provide useful focus for future research. 1 Benchmark and results The LFW benchmark [6] has become the de-facto standard testbed for unconstrained face recognition with over 100 citations in the face recognition literature since its debut 3 years ago. Extensive work [15, 14, 13, 5, 7, 4, 10, 3, 8, 9, 11, 16] has been invested in improving the recognition score which has been considerably increased since the first non-trivial result of 72% accuracy. We apply face.com’s r2011b face recognition engine to the LFW benchmark without any dataset specific pre-tuning. The obtained mean accuracy is 91.3% ± 0.3, achieved on the test set (view 2) under the unrestricted LFW protocol. Figure 1 (a) presents the ROC curve obtained in comparison to previous results. Remarkably, much of the obtained improvement is achieved at the conservative performance range, i.e., at low False Acceptance Rates (FAR). face.com has a public API service [1] which currently employs a previous version of the engine.", "title": "" }, { "docid": "77a7d26ab1b8ae5170c55359053cff44", "text": "It certainly appears that there should be a relationship between concepts and meaning, but it is not entirely clear what this relation is. We shall assume that concepts are people's psychological representations of categories (e.g., apple, chair); whereas meanings are people's understandings of words and other linguistic expressions (e.g., \"apple\", \"large chair\").
1 Currently, many cognitive scientists, especially psychologists, believe that concepts and meanings are at least roughly equivalent, with the meaning of an expression being its conceptual representation in human knowledge. From identifying the content of a concept, the meaning of the associated expression follows. Malt (1991) and Murphy (1991) review this position and various alternatives. In our paper, we shall argue that concepts and meanings differ substantially. Although they are related in important ways, the relationship is one of complementarity, not equivalence. To reach our conclusion, we shall review standard assumptions about concepts and meaning, challenge these assumptions, and present alternatives. The assumptions that we challenge, and that organize the paper, are: (1) propositional expressions represent concepts; (2) concepts are prototypes of exemplars; (3) concepts are decontextualized and universal in scope; (4) the meanings of words are concepts. We shall argue instead that perceptual symbols represent concepts; concepts are models for types of individuals in world models; concepts are contextualized and local in scope to situations; word meanings use concepts but are not concepts. Our tack in exploring these issues is to develop a theory of concepts and memory in the first three sections of the paper. In the spirit of cognitive linguistics, this theory utilizes perceptual representations and situational knowledge extensively. The final section compares concepts, as defined in our theory, with relatively well-accepted notions about meaning. We shall then assess whether concepts are equivalent to meaning, or whether they exhibit some other relationship. Please note that the bulk of this 'working paper' represents a theory of concepts and memory in the early stages of development. We are the first to acknowledge that the majority of our claims lack strong empirical support and that increased precision is necessary at the theoretical level. Although evidence exists for some aspects of our theory, many aspects rest on more of a rationalist analysis of how a cognitive system might compute concepts and meaning. This paper outlines our theory in its current form so that we can begin to examine its claims empirically and implement it computationally. A variety of experimental …", "title": "" }, { "docid": "6f854ac470ce9ffb615b5457bad2dcad", "text": "Efficient CNN designs like ResNets and DenseNet were proposed to improve accuracy vs efficiency trade-offs. They essentially increased the connectivity, allowing efficient information flow across layers. Inspired by these techniques, we propose to model connections between filters of a CNN using graphs which are simultaneously sparse and well connected. Sparsity results in efficiency while well connectedness can preserve the expressive power of the CNNs. We use a well-studied class of graphs from theoretical computer science that satisfies these properties known as Expander graphs. Expander graphs are used to model connections between filters in CNNs to design networks called X-Nets. We present two guarantees on the connectivity of X-Nets: Each node influences every node in a layer in logarithmic steps, and the number of paths between two sets of nodes is proportional to the product of their sizes. We also propose efficient training and inference algorithms, making it possible to train deeper and wider X-Nets effectively. 
Expander based models give a 4% improvement in accuracy on MobileNet over grouped convolutions, a popular technique, which has the same sparsity but worse connectivity. X-Nets give better performance trade-offs than the original ResNet and DenseNet-BC architectures. We achieve model sizes comparable to state-of-the-art pruning techniques using our simple architecture design, without any pruning. We hope that this work motivates other approaches to utilize results from graph theory to develop efficient network architectures.", "title": "" }, { "docid": "7b4704eeb740b6930a99848999fee6e6", "text": "Banach spaces, and more generally normed spaces, are endowed with two structures: a linear structure and a notion of limits, i.e., a topology. Many useful spaces are Banach spaces, and indeed, we saw many examples of those. In certain cases, however, one deals with vector spaces with a notion of convergence that is not normable. Perhaps the best example is the space of C_0^∞(K) functions that are compactly supported inside some open domain K (this space is the basis for the theory of distributions). Another such example is the space of continuous functions C(W) defined on some open set W ⊂ R^n. In such cases, we need a more general construct that embodies both the vector space structure and a topology, without having a norm. In certain cases, the topology is not only not normable, but even not metrizable (there are situations in which it is metrizable but not normable). We start by recalling basic definition in topological spaces: Definition 3.1 — Topological space. A topological space is a set S with a collection t of subsets (called the open sets) that contains both S and ∅, and is closed under arbitrary union and finite intersections. A topological space is the most basic concept of a set endowed with a notion of neighborhood. Definition 3.2 — Open neighborhood. In a topological space (S,t), a neighborhood of a point x is an open set that contains x. We will denote the collection of all the neighborhoods of x by N_x = {U ∈ t : x ∈ U}.", "title": "" }, { "docid": "d742bec0d6fd34443866c2df5e108f2b", "text": "Automation of the surrounding environment of a modern human being allows increasing his work efficiency and comfort. There has been a significant development in the area of an individual’s routine tasks and those can be automated. In the present times, we can find most of the people clinging to their mobile phones and smart devices throughout the day. Hence with the help of his companion – a mobile phone, some daily household tasks can be accomplished by personifying the use of the mobile phone. Analyzing the current smart phone market, novice mobile users are opting for Android based phones. It has become a second name for a mobile phone in layman terms. Home Automation System (HAS) has been designed for mobile phones having Android platform to automate an 8 bit Bluetooth interfaced microcontroller which controls a number of home appliances like lights, fans, bulbs and many more using on/off relay. This paper presents the automated approach of controlling the devices in a household that could ease the tasks of using the traditional method of the switch. The most famous and efficient technology for short range wireless communication, Bluetooth, is used here to automate the system.
The HAS system for Android users is a step towards easing these tasks by controlling one to twenty-four different appliances in any home environment.", "title": "" }, { "docid": "25c95104703177e11d5e1db46822c0aa", "text": "We propose a general object localization and retrieval scheme based on object shape using deformable templates. Prior knowledge of an object shape is described by a prototype template which consists of the representative contour/edges, and a set of probabilistic deformation transformations on the template. A Bayesian scheme, which is based on this prior knowledge and the edge information in the input image, is employed to find a match between the deformed template and objects in the image. Computational efficiency is achieved via a coarse-to-fine implementation of the matching algorithm. Our method has been applied to retrieve objects with a variety of shapes from images with complex background. The proposed scheme is invariant to location, rotation, and moderate scale changes of the template.", "title": "" }, { "docid": "0a6a7d8b6b99d521e9610aa0792402cc", "text": "Ajax is a new concept of web application development proposed in 2005. It is the acronym of Asynchronous JavaScript and XML. Once Ajax appeared, it was rapidly adopted in the field of Web development. An Ajax application differs from the traditional Web development model in that it uses asynchronous interaction: the client does not need to wait while the server processes the submitted data. The use of Ajax can therefore create Web user interfaces that are direct, highly available, richer, more dynamic, and closer to a local desktop application. This article first introduces the main technology and advantages of Ajax, and then puts them into practice in Web development using ASP.NET 2.0+Ajax. In this paper, Ajax is applied to the Website pass, which enables users to have a better registration experience and enhances their enthusiasm. The registration functions are enhanced greatly as well. The experiments show that the Ajax Web application development model is significantly superior to the traditional Web application development model.", "title": "" }, { "docid": "52bce24f8ec738f9b9dfd472acd6b101", "text": "Human action recognition in videos is a challenging problem with wide applications. State-of-the-art approaches often adopt the popular bag-of-features representation based on isolated local patches or temporal patch trajectories, where motion patterns like object relationships are mostly discarded. This paper proposes a simple representation specifically aimed at the modeling of such motion relationships. We adopt global and local reference points to characterize motion information, so that the final representation can be robust to camera movement. Our approach operates on top of visual codewords derived from local patch trajectories, and therefore does not require accurate foreground-background separation, which is typically a necessary step to model object relationships. Through an extensive experimental evaluation, we show that the proposed representation offers very competitive performance on challenging benchmark datasets, and combining it with the bag-of-features representation leads to substantial improvement.
On Hollywood2, Olympic Sports, and HMDB51 datasets, we obtain 59.5%, 80.6% and 40.7% respectively, which are the best reported results to date.", "title": "" }, { "docid": "b913385dfedc1c6557c11a9e5db1ce51", "text": "The design of a wireless communication system is dependent upon the propagation environment in which the system is to be used. Factors such as the time delay spread and the path loss of a radio channel affect the performance and reliability of a wireless system. These factors can be accurately measured through RF propagation measurements in the environments in which an emerging wireless technology is to be deployed.", "title": "" }, { "docid": "0398e0ea6f0bf40a90a152616f418016", "text": "The next flagship supercomputer in Japan, the replacement of the K supercomputer, is being designed toward general operation in 2020. Compute nodes based on a manycore architecture, connected by a 6-D mesh/torus network, are considered. A three-level hierarchical storage system is taken into account. A heterogeneous operating system, Linux and a light-weight kernel, is designed to build suitable environments for applications. Without codesign with applications, the system software cannot be designed to make maximum utilization of compute and storage resources. After a brief introduction of the post-K supercomputer architecture, the design issues of the system software will be presented. Two big-data applications, genome processing and meteorological and global environmental predictions, will be sketched out as target applications in the system software design. Then, it will be presented how these applications' demands affect the system software.", "title": "" }, { "docid": "38f386546b5f866d45ff243599bd8305", "text": "During the last two decades, Structural Equation Modeling (SEM) has evolved from a statistical technique for insiders to an established valuable tool for a broad scientific public. This class of analyses has much to offer, but at what price? This paper provides an overview of SEM, its underlying ideas, potential applications and current software. Furthermore, it discusses avoidable pitfalls as well as built-in drawbacks in order to lend support to researchers in deciding whether or not SEM should be integrated into their research tools. Commented findings of an internet survey give a “State of the Union Address” on SEM users and usage. Which kinds of models are preferred? Which software is favoured in current psychological research? In order to assist the reader on his first steps, a SEM first-aid kit is included. Typical problems and possible solutions are addressed, helping the reader to get the support he needs. Hence, the paper may assist the novice on the first steps and self-critically reminds the advanced reader of the limitations of Structural Equation Modeling.", "title": "" }, { "docid": "e0edee10df7529ef31c1941075461963", "text": "Although grounded theory and qualitative content analysis are similar in some respects, they differ as well; yet the differences between the two have rarely been made clear in the literature. The purpose of this article was to clarify ambiguities and reduce confusion about grounded theory and qualitative content analysis by identifying similarities and differences in the two based on a literature review and critical reflection on the authors’ own research.
Six areas of difference emerged: (a) background and philosophical base, (b) unique characteristics of each method, (c) goals and rationale of each method, (d) data analysis process, (e) outcomes of the research, and (f) evaluation of trustworthiness. This article provides knowledge that can assist researchers and students in the selection of appropriate research methods for their inquiries.", "title": "" }, { "docid": "74ecfe68112ba6309ac355ba1f7b9818", "text": "We present a novel approach to probabilistic time series forecasting that combines state space models with deep learning. By parametrizing a per-time-series linear state space model with a jointly-learned recurrent neural network, our method retains desired properties of state space models such as data efficiency and interpretability, while making use of the ability to learn complex patterns from raw data offered by deep learning approaches. Our method scales gracefully from regimes where little training data is available to regimes where data from large collection of time series can be leveraged to learn accurate models. We provide qualitative as well as quantitative results with the proposed method, showing that it compares favorably to the state-of-the-art.", "title": "" }, { "docid": "5447d3fe8ed886a8792a3d8d504eaf44", "text": "Glucose-responsive delivery of insulin mimicking the function of pancreatic β-cells to achieve meticulous control of blood glucose (BG) would revolutionize diabetes care. Here the authors report the development of a new glucose-responsive insulin delivery system based on the potential interaction between the glucose derivative-modified insulin (Glc-Insulin) and glucose transporters on erythrocytes (or red blood cells, RBCs) membrane. After being conjugated with the glucosamine, insulin can efficiently bind to RBC membranes. The binding is reversible in the setting of hyperglycemia, resulting in fast release of insulin and subsequent drop of BG level in vivo. The delivery vehicle can be further simplified utilizing injectable polymeric nanocarriers coated with RBC membrane and loaded with Glc-Insulin. The described work is the first demonstration of utilizing RBC membrane to achieve smart insulin delivery with fast responsiveness.", "title": "" }, { "docid": "4075eb657e87ad13e0f47ab36d33df54", "text": "MOTIVATION\nControlled vocabularies such as the Medical Subject Headings (MeSH) thesaurus and the Gene Ontology (GO) provide an efficient way of accessing and organizing biomedical information by reducing the ambiguity inherent to free-text data. Different methods of automating the assignment of MeSH concepts have been proposed to replace manual annotation, but they are either limited to a small subset of MeSH or have only been compared with a limited number of other systems.\n\n\nRESULTS\nWe compare the performance of six MeSH classification systems [MetaMap, EAGL, a language and a vector space model-based approach, a K-Nearest Neighbor (KNN) approach and MTI] in terms of reproducing and complementing manual MeSH annotations. A KNN system clearly outperforms the other published approaches and scales well with large amounts of text using the full MeSH thesaurus. Our measurements demonstrate to what extent manual MeSH annotations can be reproduced and how they can be complemented by automatic annotations. 
We also show that a statistically significant improvement can be obtained in information retrieval (IR) when the text of a user's query is automatically annotated with MeSH concepts, compared to using the original textual query alone.\n\n\nCONCLUSIONS\nThe annotation of biomedical texts using controlled vocabularies such as MeSH can be automated to improve text-only IR. Furthermore, the automatic MeSH annotation system we propose is highly scalable and it generates improvements in IR comparable with those observed for manual annotations.", "title": "" }, { "docid": "4bb0041bfd95fabc73f4c57f1cdadc4e", "text": "This paper describes an algorithm to calculate near-optimal minimum time trajectories for four wheeled omnidirectional vehicles, which can be used as part of a high-level path planner. The algorithm is based on a relaxed optimal control problem. It takes limited friction and vehicle dynamics into account, as encountered in high-performance omnidirectional vehicles. The low computational complexity makes the application in real-time feasible. An implementation of the algorithm on a real vehicle is presented and discussed.", "title": "" } ]
scidocsrr
c07dcbc4d5cc84c8d967882ec95cb2f2
Keeping Authorities "Honest or Bust" with Decentralized Witness Cosigning
[ { "docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2", "text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.", "title": "" }, { "docid": "c78922c2c3ee5425701da2ecb67da14d", "text": "We present an analysis of security vulnerabilities in the domain name system (DNS) and the DNS security extensions (DNSSEC). DNS data that is provided by name servers lacks support for data origin authentication and data integrity. This makes DNS vulnerable to man in the middle (MITM) attacks, as well as a range of other attacks. To make DNS more robust, DNSSEC was proposed by the Internet Engineering Task Force (IETF). DNSSEC provides data origin authentication and integrity by using digital signatures. Although DNSSEC provides security for DNS data, it suffers from serious security and operational flaws. We discuss the DNS and DNSSEC architectures, and consider the associated security vulnerabilities", "title": "" }, { "docid": "9db9902c0e9d5fc24714554625a04c7a", "text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.", "title": "" } ]
[ { "docid": "35da724255bbceb859d01ccaa0dec3b1", "text": "A linear differential equation with rational function coefficients has a Bessel type solution when it is solvable in terms of <i>B</i><sub><i>v</i></sub>(<i>f</i>), <i>B</i><sub><i>v</i>+1</sub>(<i>f</i>). For second order equations, with rational function coefficients, <i>f</i> must be a rational function or the square root of a rational function. An algorithm was given by Debeerst, van Hoeij, and Koepf, that can compute Bessel type solutions if and only if <i>f</i> is a rational function. In this paper we extend this work to the square root case, resulting in a complete algorithm to find all Bessel type solutions.", "title": "" }, { "docid": "831196d53c501bf34b1abc872f70e0e4", "text": "Digital images are the most prevalent way to spread a message. So the authenticity of images is very essential. But due to advancement of the technology the editing of images has become very effortless. Copy-move forgery is most basic technique to alter an image. In this one part of image is copied, called as snippet, and pasted within same image and most likely post-processing it. Considerable number of algorithms is proposed to detect different post-processing on snippet of image. In this paper novel approach is proposed to detect combination of different post-processing operations by single method. It is analyzed that block-based features method DCT is robust to Gaussian noise and JPEG compression, secondly the keypoint-based feature method SIFT is robust to rotation and scaling. Thus by combining SIFT and DCT we are able to detect forgery under post-processing operations of rotation, scaling, Gaussian noise, and JPEG compression and thus the efficiency to detect forgery improves.", "title": "" }, { "docid": "04ba17b4fc6b506ee236ba501d6cb0cf", "text": "We propose a family of learning algorithms based on a new form f regularization that allows us to exploit the geometry of the marginal distribution. We foc us on a semi-supervised framework that incorporates labeled and unlabeled data in a general-p u pose learner. Some transductive graph learning algorithms and standard methods including Suppor t Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize pr op rties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theor e ical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we ob tain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semiupervised algorithms are able to use unlabeled data effectively. Finally we have a brief discuss ion of unsupervised and fully supervised learning within our general framework.", "title": "" }, { "docid": "df5c521e040c59ea2b9ce044fa68d864", "text": "We consider the problem of estimating real-time traffic conditions from sparse, noisy GPS probe vehicle data. We specifically address arterial roads, which are also known as the secondary road network (highways are considered the primary road network). We consider several estimation problems: historical traffic patterns, real-time traffic conditions, and forecasting future traffic conditions. We assume that the data available for these estimation problems is a small set of sparsely traced vehicle trajectories, which represents a small fraction of the total vehicle flow through the network. 
We present an expectation maximization algorithm that simultaneously learns the likely paths taken by probe vehicles as well as the travel time distributions through the network. A case study using data from San Francisco taxis is used to illustrate the performance of the algorithm.", "title": "" }, { "docid": "1caf2d15e1f9c6fcacfcb46d8fdfc5b3", "text": "Content Delivery Networks (CDNs) [79, 97] have received considerable research attention in the recent past. A few studies have investigated CDNs to categorize and analyze them, and to explore the uniqueness, weaknesses, opportunities, and future directions in this field. Peng presents an overview of CDNs [75]. His work describes the critical issues involved in designing and implementing an effective CDN, and surveys the approaches proposed in literature to address these problems. Vakali et al. [95] present a survey of CDN architecture and popular CDN service providers. The survey is focused on understanding the CDN framework and its usefulness. They identify the characteristics and current practices in the content networking domain, and present an evolutionary pathway for CDNs, in order to exploit the current content networking trends. Dilley et al. [29] provide an insight into the overall system architecture of the leading CDN, Akamai [1]. They provide an overview of the existing content delivery approaches and describe Akamai’s network infrastructure and its operations in detail. They also point out the technical challenges that are to be faced while constructing a global CDN like Akamai. Saroiu et al. [84] examine content delivery from the point of view of four content delivery systems: Hypertext Transfer Protocol (HTTP) Web traffic, the Akamai CDN, Gnutella [8, 25], and KaZaa [62, 66] peer-to-peer file sharing systems. They also present significant implications for large organizations, service providers, network infrastructure providers, and general content delivery providers. Kung et al. [60] describe a taxonomy for content networks and introduce a new class of content networks that perform “semantic aggregation and content-sensitive placement” of content. They classify content networks based on their attributes in two dimensions: content aggregation and content placement. Sivasubramanian et al. [89] identify the issues", "title": "" }, { "docid": "72977e6d2c3601519b4927c76e376fd1", "text": "PURPOSE OF REVIEW\nNutritional insufficiencies of nutrients such as omega-3 highly unsaturated fatty acids (HUFAs), vitamins and minerals have been linked to suboptimal developmental outcomes including attention deficit hyperactivity disorder (ADHD). Although the predominant treatment is currently psychostimulant medications, randomized clinical trials with omega-3 HUFAs have reported small-to-modest effects in reducing symptoms of ADHD in children despite arguable individual methodological and design misgivings.\n\n\nRECENT FINDINGS\nThis review presents, discusses and critically evaluates data and findings from meta-analytic and systematic reviews and clinical trials published within the last 12 months. Recent trajectories of this research are discussed, such as comparing eicosapentaenoic acid and docosahexaenoic acid and testing the efficacy of omega-3 HUFAs as an adjunct to methylphenidate. 
Discussion includes highlighting limitations and potential future directions such as addressing variable findings by accounting for other nutritional deficiencies and behavioural food intolerances.\n\n\nSUMMARY\nThe authors conclude that given the current economic burden of ADHD, estimated in the region of $77 billion in the USA alone, in addition to the fact that a proportion of patients with ADHD are either treatment resistant, nonresponders or withdraw from medication because of adverse side-effects, the investigation of nonpharmacological interventions including omega-3 HUFAs in clinical practice warrants extrapolating.", "title": "" }, { "docid": "1193515655256edf4c9b490fb5d9f03e", "text": "Long-term demand forecasting presents the first step in planning and developing future generation, transmission and distribution facilities. One of the primary tasks of an electric utility accurately predicts load demand requirements at all times, especially for long-term. Based on the outcome of such forecasts, utilities coordinate their resources to meet the forecasted demand using a least-cost plan. In general, resource planning is performed subject to numerous uncertainties. Expert opinion indicates that a major source of uncertainty in planning for future capacity resource needs and operation of existing generation resources is the forecasted load demand. This paper presents an overview of the past and current practice in longterm demand forecasting. It introduces methods, which consists of some traditional methods, neural networks, genetic algorithms, fuzzy rules, support vector machines, wavelet networks and expert systems.", "title": "" }, { "docid": "cf52c02c9aa4ca9274911f0098d6cb89", "text": "The individuation of areas that are more likely to be impacted by new events in volcanic regions is of fundamental relevance for mitigating possible consequences, both in terms of loss of human lives and material properties. For this purpose, the lava flow hazard maps are increasingly used to evaluate, for each point of a map, the probability of being impacted by a future lava event. Typically, these maps are computed by relying on an adequate knowledge about the volcano, assessed by an accurate analysis of its past behavior, together with the explicit simulation of thousands of hypothetical events, performed by a reliable computational model. In this paper, General-Purpose Computation with Graphics Processing Units (GPGPU) is applied, in conjunction with the SCIARA lava flow Cellular Automata model, to the process of building the lava invasion maps. Using different GPGPU devices, the paper illustrates some different implementation strategies and discusses numerical results obtained for a case study at Mt. Etna (Italy), Europe’s most active volcano.", "title": "" }, { "docid": "ef52c7d4c56ff47c8e18b42e0a757655", "text": "Microprocessors and memory systems suffer from a growing gap in performance. We introduce Active Pages, a computation model which addresses this gap by shifting data-intensive computations to the memory system. An Active Page consists of a page of data and a set of associated functions which can operate upon that data. We describe an implementation of Active Pages on RADram (Reconfigurable Architecture DRAM), a memory system based upon the integration of DRAM and reconfigurable logic. Results from the SimpleScalar simulator [BA97] demonstrate up to 1000X speedups on several applications using the RADram system versus conventional memory systems. 
We also explore the sensitivity of our results to implementations in other memory technologies.", "title": "" }, { "docid": "a7e6a2145b9ae7ca2801a3df01f42f5e", "text": "The aim of this systematic review was to compare the clinical performance and failure modes of teeth restored with intra-radicular retainers. A search was performed on PubMed/Medline, Central and ClinicalTrials databases for randomized clinical trials comparing clinical behavior and failures of at least two types of retainers. From 341 detected papers, 16 were selected for full-text analysis, of which 9 met the eligibility criteria. A manual search added 2 more studies, totalizing 11 studies that were included in this review. Evaluated retainers were fiber (prefabricated and customized) and metal (prefabricated and cast) posts, and follow-up ranged from 6 months to 10 years. Most studies showed good clinical behavior for evaluated intra-radicular retainers. Reported survival rates varied from 71 to 100% for fiber posts and 50 to 97.1% for metal posts. Studies found no difference in the survival among different metal posts and most studies found no difference between fiber and metal posts. Two studies also showed that remaining dentine height, number of walls and ferrule increased the longevity of the restored teeth. Failures of fiber posts were mainly due to post loss of retention, while metal post failures were mostly related to root fracture, post fracture and crown and/or post loss of retention. In conclusion, metal and fiber posts present similar clinical behavior at short to medium term follow-up. Remaining dental structure and ferrule increase the survival of restored pulpless teeth. Studies with longer follow-up are needed.", "title": "" }, { "docid": "78ec561e9a6eb34972ab238a02fdb40a", "text": "OBJECTIVE\nTo evaluate the safety and efficacy of mass circumcision performed using a plastic clamp.\n\n\nMETHODS\nA total of 2013 males, including infants, children, adolescents, and adults were circumcised during a 7-day period by using a plastic clamp technique. Complications were analyzed retrospectively in regard to 4 different age groups. Postcircumcision sexual function and satisfaction rates of the adult males were also surveyed.\n\n\nRESULTS\nThe mean duration of circumcision was 3.6±1.2 minutes. Twenty-six males who were lost to follow-up were excluded from the study. The total complication rate was found to be 2.47% among the remaining 1987 males, with a mean age of 7.8±2.5 years. The highest complication rate (2.93%) was encountered among the children<2 years age, which was because of the high rate of buried penis (0.98%) and excessive foreskin (0.98%) observed in this group. The complication rates of older children, adolescents, and adults were slightly lower than the children<2 years age, at 2.39%, 2.51%, and 2.40%, respectively. Excessive foreskin (0.7%) was the most common complication observed after mass circumcision. Bleeding (0.6%), infection (0.55%), wound dehiscence (0.25%), buried penis (0.25%), and urine retention (0.1%) were other encountered complications. 
The erectile function and sexual libido in adolescents and adults was not affected by circumcision and a 96% satisfaction rate was obtained.\n\n\nCONCLUSIONS\nMass circumcision performed by a plastic clamp technique was found to be a safe and time-saving method of circumcising a large number of males at any age.", "title": "" }, { "docid": "279d6de6ed6ade25d5ac0ff3d1ecde49", "text": "This paper explores the relationship between TV viewership ratings for Scandinavian's most popular talk show, Skavlan and public opinions expressed on its Facebook page. The research aim is to examine whether the activity on social media affects the number of viewers per episode of Skavlan, how the viewers are affected by discussions on the Talk Show, and whether this creates debate on social media afterwards. By analyzing TV viewer ratings of Skavlan talk show, Facebook activity and text classification of Facebook posts and comments with respect to type of emotions and brand sentiment, this paper identifes patterns in the users' real-world and digital world behaviour.", "title": "" }, { "docid": "f3aa019816ae399c3fe834ffce3db53e", "text": "This paper presents a method to incorporate 3D line segments in vision based SLAM. A landmark initialization method that relies on the Plucker coordinates to represent a 3D line is introduced: a Gaussian sum approximates the feature initial state and is updated as new observations are gathered by the camera. Once initialized, the landmarks state is estimated along an EKF-based SLAM approach: constraints associated with the Plucker representation are considered during the update step of the Kalman filter. The whole SLAM algorithm is validated in simulation runs and results obtained with real data are presented.", "title": "" }, { "docid": "ac95ed317bfcde1fd9e146cdd0c50fe5", "text": "The development of literacy and reading proficiency is a building block of lifelong learning that must be supported both in the classroom and at home. While the promise of interactive learning technologies has widely been demonstrated, little is known about how an interactive robot might play a role in this development. We used eight design features based on recommendations from interest-development and human-robot-interaction literatures to design an in-home learning companion robot for children aged 11--12. The robot was used as a technology probe to explore families' (N=8) habits and views about reading, how a reading technology might be used, and how children perceived reading with the robot. Our results indicate reading with the learning companion to be a way to socially engage with reading, which may promote the development of reading interest and ability. We discuss design and research implications based on our findings.", "title": "" }, { "docid": "72845c1eebbe683bfb91db2ddd5b0fee", "text": "Sketch-based modeling strives to bring the ease and immediacy of drawing to the 3D world. However, while drawings are easy for humans to create, they are very challenging for computers to interpret due to their sparsity and ambiguity. We propose a data-driven approach that tackles this challenge by learning to reconstruct 3D shapes from one or more drawings. At the core of our approach is a deep convolutional neural network (CNN) that predicts occupancy of a voxel grid from a line drawing. This CNN provides an initial 3D reconstruction as soon as the user completes a single drawing of the desired shape. 
We complement this single-view network with an updater CNN that refines an existing prediction given a new drawing of the shape created from a novel viewpoint. A key advantage of our approach is that we can apply the updater iteratively to fuse information from an arbitrary number of viewpoints, without requiring explicit stroke correspondences between the drawings. We train both CNNs by rendering synthetic contour drawings from hand-modeled shape collections as well as from procedurally-generated abstract shapes. Finally, we integrate our CNNs in an interactive modeling system that allows users to seamlessly draw an object, rotate it to see its 3D reconstruction, and refine it by re-drawing from another vantage point using the 3D reconstruction as guidance.", "title": "" }, { "docid": "373daff94b0867437e2211f460437a19", "text": "We live in an increasingly connected and automated society. Smart environments embody this trend by linking computers to everyday tasks and settings. Important features of such environments are that they possess a degree of autonomy, adapt themselves to changing conditions, and communicate with humans in a natural way. These systems can be found in offices, airports, hospitals, classrooms, or any other environment. This article discusses automation of our most personal environment: the home. There are several characteristics that are commonly found in smart homes. This type of environment assumes controls and coordinates a network of sensors and devices, relieving the inhabitants of this burden. Interaction with smart homes is in a form that is comfortable to people: speech, gestures, and actions take the place of windows, icons, menus, and pointers. We define a smart home as one that is able to acquire and apply knowledge about its inhabitants and their surroundings in order to adapt to the inhabitants and meet the goals of comfort and efficiency. Designing and implementing smart homes requires a unique breadth of knowledge not limited to a single discipline, but integrates aspects of machine learning, decision making, human-machine interfaces, wireless networking, mobile communications, databases, sensor networks, and pervasive computing. With these capabilities, the home can control many aspects of the environment such as climate, lighting, maintenance, and entertainment. Intelligent automation of these activities can reduce the amount of interaction required by inhabitants and reduce energy consumption and other potential operating costs. The same capabilities can be", "title": "" }, { "docid": "7110e68a420d10fa75a943d1c1f0bd42", "text": "This paper proposes a compact microstrip Yagi-Uda antenna for 2.45 GHz radio frequency identification (RFID) handheld reader applications.
The proposed antenna is etched on a piece of FR4 substrate with an overall size of 65 mm × 55 mm ×1.6 mm and consists of a microstrip balun, a dipole, and a director. The ground plane is designed to act as a reflector that contributes to enhancing the antenna gain. The measured 10-dB return loss bandwidth and peak gain achieved by the proposed antenna are 380 MHz and 7.5 dBi, respectively. In addition, a parametric study is conducted to facilitate the design and optimization processes for engineers.", "title": "" }, { "docid": "fc9babe40365e5dc943fccf088f7a44f", "text": "The network performance of virtual machines plays a critical role in Network Functions Virtualization (NFV), and several technologies have been developed to address hardware-level virtualization shortcomings. Recent advances in operating system level virtualization and deployment platforms such as Docker have made containers an ideal candidate for high performance application encapsulation and deployment. However, Docker and other solutions typically use lower-performing networking mechanisms. In this paper, we explore the feasibility of using technologies designed to accelerate virtual machine networking with containers, in addition to quantifying the network performance of container-based VNFs compared to the state-of-the-art virtual machine solutions. Our results show that containerized applications can provide lower latency and delay variation, and can take advantage of high performance networking technologies previously only used for hardware virtualization.", "title": "" }, { "docid": "21aa2df33199b6fbdc64abd1ea65341b", "text": "AIM\nBefore an attempt is made to develop any population-specific behavioural change programme, it is important to know what the factors that influence behaviours are. The aim of this study was to identify what are the perceived determinants that attribute to young people's choices to both consume and misuse alcohol.\n\n\nMETHOD\nUsing a descriptive survey design, a web-based questionnaire based on the Theory of Triadic Influence was administered to students aged 18-29 years at one university in Northern Ireland.\n\n\nRESULTS\nOut of the total respondents ( n = 595), knowledge scores on alcohol consumption and the health risks associated with heavy episodic drinking were high (92.4%, n = 550). Over half (54.1%, n = 322) cited the Internet as their main source for alcohol-related information. The three most perceived influential factors of inclination to misuse alcohol were strains/conflict within the family home ( M = 2.98, standard deviation ( SD) = 0.18, 98.7%, n = 587), risk taking/curiosity behaviour ( M = 2.97, SD = 0.27, 97.3%, n = 579) and the desire not to be socially alienated ( M = 2.94, SD = 0.33, 96%, n = 571). Females were statistically significantly more likely to be influenced by desire not to be socially alienated than males (  p = .029). Religion and personal reasons were the most commonly cited reasons for not drinking.\n\n\nCONCLUSION\nFuture initiatives to reduce alcohol misuse and alcohol-related harms need to focus on changing social normative beliefs and attitudes around alcohol consumption and the family and environmental factors that influence the choice of young adult's alcohol drinking behaviour. Investment in multi-component interventions may be a useful approach.", "title": "" } ]
scidocsrr
2ece74ffbe20e11fc680c9437e6dce1f
A new gate driver integrated circuit for IGBT devices with advanced protections
[ { "docid": "30e89edb65cbf54b27115c037ee9c322", "text": "AbstructIGBT’s are available with short-circuit withstand times approaching those of bipolar transistors. These IGBT’s can therefore be protected by the same relatively slow-acting circuitry. The more efficient IGBT’s, however, have lower shortcircuit withstand times. While protection of these types of IGBT’s is not difficult, it does require a reassessment of the traditional protection methods used for the bipolar transistors. An in-depth discussion on the behavior of IGBT’s under different short-circuit conditions is carried out and the effects of various parameters on permissible short-circuit time are analyzed. The paper also rethinks the problem of providing short-circuit protection in relation to the special characteristics of the most efficient IGBT’s. The pros and cons of some of the existing protection circuits are discussed and, based on the recommendations, a protection scheme is implemented to demonstrate that reliable short-circuit protection of these types of IGBT’s can be achieved without difficulty in a PWM motor-drive application. volts", "title": "" } ]
[ { "docid": "4b9695da76b4ab77139549a4b444dae7", "text": "Wireless Sensor Network (WSN) is one of the key technologies of 21st century, while it is a very active and challenging research area. It seems that in the next coming year, thanks to 6LoWPAN, these wireless micro-sensors will be embedded in everywhere, because 6LoWPAN enables P2P connection between wireless nodes over IPv6. Nowadays different implementations of 6LoWPAN stacks are available so it is interesting to evaluate their performance in term of memory footprint and compliant with the RFC4919 and RFC4944. In this paper, we present a survey on the state-of-art of the current implementation of 6LoWPAN stacks such as uIP/Contiki, SICSlowpan, 6lowpancli, B6LoWPAN, BLIP, NanoStack and Jennic's stack. The key features of all these 6LoWPAN stacks will be established. Finally, we discuss the evolution of the current implementations of 6LoWPAN stacks.", "title": "" }, { "docid": "b24f07add0da3931b23f4a13ea6983b9", "text": "Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and sensitivity of gyroscopes, the proposed algorithm extracts the frequency domain features from three-dimensional (3D) angular velocities of a smartphone through FFT (fast Fourier transform) and identifies whether its holder is walking or not irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted by involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves the precision of 93.76 % and recall of 93.65 % for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74 % , and is better than both of the several well-known counterparts and commercial products.", "title": "" }, { "docid": "47192ebdd7c5998359e5cf0a059b5434", "text": "In this paper, we present a hybrid approach for performing token and sentence levels Dialect Identification in Arabic. Specifically we try to identify whether each token in a given sentence belongs to Modern Standard Arabic (MSA), Egyptian Dialectal Arabic (EDA) or some other class and whether the whole sentence is mostly EDA or MSA. The token level component relies on a Conditional Random Field (CRF) classifier that uses decisions from several underlying components such as language models, a named entity recognizer and and a morphological analyzer to label each word in the sentence. The sentence level component uses a classifier ensemble system that relies on two independent underlying classifiers that model different aspects of the language. Using a featureselection heuristic, we select the best set of features for each of these two classifiers. We then train another classifier that uses the class labels and the confidence scores generated by each of the two underlying classifiers to decide upon the final class for each sentence. 
The token level component yields a new state of the art F-score of 90.6% (compared to previous state of the art of 86.8%) and the sentence level component yields an accuracy of 90.8% (compared to 86.6% obtained by the best state of the art system).", "title": "" }, { "docid": "7979bd1fca3e705837547aea5d1a0eb4", "text": "In court interpreting, the law distinguishes between the prescribed activity of what it considers translation – defined as an objective, mechanistic, transparent process in which the interpreter acts as a mere conduit of words – and the proscribed activity of interpretation, which involves interpreters decoding and attempting to convey their understanding of speaker meanings and intentions. This article discusses the practicability of this cut-and-dried legal distinction between translation and interpretation and speculates on the reasons for its existence. An attempt is made to illustrate some of the moral dilemmas that confront court interpreters, and an argument is put forward for a more realist understanding of their role and a major improvement in their professional status; as recognized professionals, court interpreters can more readily assume the latitude they need in order to ensure effective communication in the courtroom. Among members of the linguistic professions, the terms interpretation and interpreting are often used interchangeably to refer to the oral transfer of meaning between languages, as opposed to translation, which is reserved for the written exercise. Interpretation, however, becomes a potentially charged and ambiguous term in the judicial context, where it refers to a specific judicial process. This process is performed intralingually, in the language of the relevant legal system, and effected in accordance with a number of rules and presumptions for determining the ‘true’ meaning of a written document. Hence the need to adopt a rigorous distinction between interpreting as an interlingual process and interpretation as the act of conveying one’s understanding of meanings and intentions within the same language in order to avoid misunderstanding in the judicial context. Morris (1993a) discusses the attitude of members of the legal community to the activities and status of court interpreters, with particular reference to English-speaking countries. The discussion is based on an extensive survey of both historical and modern English-language law reports of cases in which issues of interlinguistic interpreting were addressed explicitly. The comments in these reports record the beliefs, ISSN 1355-6509 © St. Jerome Publishing, Manchester The Moral Dilemmas of Court Interpreting 26 attitudes and arguments of legal practitioners, mainly lawyers and judges, at different periods in history and in various jurisdictions. By and large, they reflect negative judicial views of the interpreting process and of those who perform it, in the traduttore traditore tradition, spanning the gamut from annoyance to venom, with almost no understanding of the linguistic issues and dilemmas involved. Legal practitioners, whose own performance, like that of translators and interpreters, relies on the effective use and manipulation of language, were found to deny interpreters the same latitude in understanding and expressing concepts that they themselves enjoy. 
Thus they firmly state that, when rendering meaning from one language to another, court interpreters are not to interpret – this being an activity which only lawyers are to perform, but to translate – a term which is defined, sometimes expressly and sometimes by implication, as rendering the speaker’s words verbatim. When it comes to court interpreting, then, the law distinguishes between the prescribed activity of what it calls translation – defined as an objective, mechanistic, transparent process in which the interpreter acts as a mere conduit of words – and the proscribed activity of interpretation, which involves interpreters decoding and attempting to convey their understanding of speaker meanings and intentions. In the latter case, the interpreter is perceived as assuming an active role in the communication process, something that is anathema to lawyers and judges. The law’s attitude to interpreters is at odds with the findings of current research in communication which recognizes the importance of context in the effective exchange of messages: it simply does not allow interpreters to use their discretion or act as mediators in the judicial process. The activity of interpretation, as distinct from translation, is held by the law to be desirable and acceptable for jurists, but utterly inappropriate and prohibited for linguists. The law continues to proscribe precisely those aspects of the interpreting process which enable it to be performed with greater accuracy because they have two undesirable side effects from the legal point of view: one is to highlight the interpreter’s presence and contribution, the other is to challenge and potentially undermine the performance of the judicial participants in forensic activities. 1. Interpretation as a communicative process The contemporary view of communication, of which interlingual interpretation is but one particularly salient form, sees all linguistic acts of communication as involving (or indeed, as being tantamount to) acts of translation, whether or not they involve different linguistic systems. Similarly, modern translation theorists see all interlingual translation as", "title": "" }, { "docid": "43b18a9fe6c1c67109ea7ee27285714b", "text": "Nonlinear dimensionality reduction methods have demonstrated top-notch performance in many pattern recognition and image classification tasks. Despite their popularity, they suffer from highly expensive time and memory requirements, which render them inapplicable to large-scale datasets. To leverage such cases we propose a new method called “Path-Based Isomap”. Similar to Isomap, we exploit geodesic paths to find the low-dimensional embedding. However, instead of preserving pairwise geodesic distances, the low-dimensional embedding is computed via a path-mapping algorithm. Due to the much fewer number of paths compared to number of data points, a significant improvement in time and memory complexity with a comparable performance is achieved. The method demonstrates state-of-the-art performance on well-known synthetic and real-world datasets, as well as in the presence of noise.", "title": "" }, { "docid": "43e3d3639d30d9e75da7e3c5a82db60a", "text": "This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. 
Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold. First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a “free” fashion. Our framework produces significantly better results than the state of the arts on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.", "title": "" }, { "docid": "7c98d4c1ab375526c426f8156650cb22", "text": "Online privacy remains an ongoing source of debate in society. Sensitive to this, many web platforms are offering users greater, more granular control over how and when their information is revealed. However, recent research suggests that information control mechanisms of this sort are not necessarily of economic benefit to the parties involved. We examine the use of these mechanisms and their economic consequences, leveraging data from one of the world's largest global crowdfunding platforms, where contributors can conceal their identity or contribution amounts from public display. We find that information hiding is more likely when contributors are under greater scrutiny or exhibiting “undesirable” behavior. We also identify an anchoring effect from prior contributions, which is eliminated when earlier contributors conceal their amounts. Subsequent analyses indicate that a nuanced approach to the design and provision of information control mechanisms, such as varying default settings based on contribution amounts, can help promote larger contributions.", "title": "" }, { "docid": "1f46ea05e58da0885805247a1f107f83", "text": "Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures. Code and models for our experiments are available at https://github.com/szagoruyko/attention-transfer.", "title": "" }, { "docid": "e4db92a51368be3a742efb42c0ac68f1", "text": "This paper reviews the current knowledge of microbial processes affecting C sequestration in agroecosystems. The microbial contribution to soil C storage is directly related to microbial community dynamics and the balance between formation and degradation of microbial byproducts. Soil microbes also indirectly influence C cycling by improving soil aggregation, which physically protects soil organic matter (SOM). 
Consequently, the microbial contribution to C sequestration is governed by the interactions between the amount of microbial biomass, microbial community structure, microbial byproducts, and soil properties such as texture, clay mineralogy, pore-size distribution, and aggregate dynamics. The capacity of a soil to protect microbial biomass and microbially derived organic matter (MOM) is directly and/or indirectly (i.e., through physical protection by aggregates) related to the reactive properties of clays. However, the stabilization of MOM in the soil is also related to the efficiency with which microorganisms utilize substrate C and the chemical nature of the byproducts they produce. Crop rotations, reduced or no-tillage practices, organic farming, and cover crops increase total microbial biomass and shift the community structure toward a more fungal-dominated community, thereby enhancing the accumulation of MOM. A quantitative and qualitative improvement of SOM is generally observed in agroecosystems favoring a fungal-dominated community, but the mechanisms leading to this improvement are not completely understood. Gaps within our knowledge on MOM-C dynamics and how they are related to soil properties and agricultural practices are identified. GREATER THAN two-thirds of the organic C stored in terrestrial ecosystems is contained in SOM, with the net flux of C from soils to the atmosphere being on the order of 60 Pg C yr (Schlesinger, 1997). The historical loss of soil C due to intensive cultivation is estimated to be about 55 Pg C or 25% of the original C present in virgin, uncultivated soils, and has contributed significantly to CO2 release to the atmosphere on a global scale (Cole et al., 1997). The potential for agricultural soils to regain some of this lost C is being evaluated as a means to improve soil fertility, reduce erosion, and mitigate CO2 emissions. Increasing the potential for agricultural soils to sequester C requires a thorough understanding of the underlying processes and mechanisms controlling soil C levels, for which a great deal of knowledge already exists. Previous reviews have examined the relationships between microbial communities and SOM decomposition (Scow, 1997), management controls on soil C (Paustian et al., 1997), and the macromolecular composition of SOM (Kogel-Knabner, 2002). Here, we focus specifically on how soil bacteria and fungi may differentially influence the formation and stabilization of different SOM components in agricultural soils via differences in metabolism, the recalcitrance of microbial products, and interactions with soil physical properties (i.e., texture, mineralogy, and structure). Soil C levels are fundamentally determined by the balance between organic matter inputs, primarily as plant residues, roots, and root exudates, and organic matter losses due to decomposition, erosion, and leaching. Bacteria and fungi generally comprise .90% of the total soil microbial biomass, and they are responsible for the majority of SOM decomposition. Since soil microbial communities are key regulators of SOM dynamics and nutrient availability, shifts in microbial community composition and function (e.g., substrate utilization) in response to different agricultural management practices may play an important role in determining rates of C loss from the soil. 
The ratio of fungal:bacterial biomass has been shown to be particularly sensitive to soil disturbance, with lower ratios associated with increased intensity of cultivation (Bailey et al., 2002; Beare et al., 1992; Frey et al., 1999), increased grazing pressure (Bardgett et al., 1996, 1998), and increased N fertilization inputs (Bardgett and McAlister, 1999; Bardgett et al., 1996, 1999; Frey et al., 2004). In addition, fungal: bacterial biomass ratios were found to increase with successional age in a semiarid grassland community (Klein et al., 1996) and along an Alaskan forest chronosequence (Ohtonen et al., 1999). Substrate quality also alters fungal:bacterial ratios, with low quality substrates (high C/N) favoring fungi and high quality (low C/N) substrates favoring bacteria (Bossuyt et al., 2001). Organic C taken up by the microbial biomass is partitioned between microbial cell biomass production, metabolite excretion, and respiration (Fig. 1). The degree to which MOM accumulates in soil depends on a balance between production and decomposition of microbial products, that is: (1) the microbial growth efficiency (MGE), the efficiency with which substrates are incorporated into bacterial and fungal biomass and byproducts, (2) the degree of protection of microbial biomass in the soil structure, and (3) the rate at which bacterial and fungal byproducts are decomposed by other microorganisms. The proportion of substrate C retained as biomass versus respired as CO2 depends on MGE and the degree of protection of microbial biomass; the lower the MGE or the less protected the biomass, the more MOM-C is lost as CO2 (Fig. 1, Step I).", "title": "" }, { "docid": "0116d685010d09b88ef35b8ba0fa973a", "text": "Recurrent neural networks (RNNs) are widely used to model sequential data but their non-linear dependencies between sequence elements prevent parallelizing training over sequence length. We show the training of RNNs with only linear sequential dependencies can be parallelized over the sequence length using the parallel scan algorithm, leading to rapid training on long sequences even with small minibatch size. We develop a parallel linear recurrence CUDA kernel and show that it can be applied to immediately speed up training and inference of several state of the art RNN architectures by up to 9x. We abstract recent work on linear RNNs into a new framework of linear surrogate RNNs and develop a linear surrogate model for the long short-term memory unit, the GILR-LSTM, that utilizes parallel linear recurrence. 
We extend sequence learning to new extremely long sequence regimes that were previously out of reach by successfully training a GILR-LSTM on a synthetic sequence classification task with a one million timestep dependency.", "title": "" }, { "docid": "0b0723466d6fc726154befea8a1d7398", "text": "● Volume of pages makes efficient WWW navigation difficult ● Aim: To analyse users' navigation history to generate tools that increase navigational efficiency – ie. Predictive server prefetching ● Provides a mathematical foundation to several concepts", "title": "" }, { "docid": "90b1d0a8670e74ff3549226acd94973e", "text": "Language identification is the task of automatically detecting the language(s) present in a document based on the content of the document. In this work, we address the problem of detecting documents that contain text from more than one language (multilingual documents). We introduce a method that is able to detect that a document is multilingual, identify the languages present, and estimate their relative proportions. We demonstrate the effectiveness of our method over synthetic data, as well as real-world multilingual documents collected from the web.", "title": "" }, { "docid": "bde70da078bba2a63899cc7eb2a9aaf9", "text": "In the past few years, cloud computing develops very quickly. A large amount of data are uploaded and stored in remote public cloud servers which cannot fully be trusted by users. Especially, more and more enterprises would like to manage their data by the aid of the cloud servers. However, when the data outsourced in the cloud are sensitive, the challenges of security and privacy becomes urgent for wide deployment of the cloud systems. This paper proposes a secure data sharing scheme to ensure the privacy of data owner and the security of the outsourced cloud data. The proposed scheme provides flexible utility of data while solving the privacy and security challenges for data sharing. The security and efficiency analysis demonstrate that the designed scheme is feasible and efficient. At last, we discuss its application in electronic health record.", "title": "" }, { "docid": "b7fc7aa3a0824c71bc3b00f335b7b65e", "text": "In this paper we advocate the use of device-to-device (D2D) communications in a LoRaWAN Low Power Wide Area Network (LPWAN). After overviewing the critical features of the LoRaWAN technology, we discuss the pros and cons of enabling the D2D communications for it. Subsequently we propose a network-assisted D2D communications protocol and show its feasibility by implementing it on top of a LoRaWAN-certified commercial transceiver. The conducted experiments show the performance of the proposed D2D communications protocol and enable us to assess its performance. More precisely, we show that the D2D communications can reduce the time and energy for data transfer by 6 to 20 times compared to conventional LoRaWAN data transfer mechanisms. In addition, the use of D2D communications may have a positive effect on the network by enabling spatial re-use of the frequency resources. The proposed LoRaWAN D2D communications can be used for a wide variety of applications requiring high coverage, e.g. use cases in distributed smart grid deployments for management and trading.", "title": "" }, { "docid": "17a11a48d3ee024b8a606caf2c028986", "text": "For evaluating or training different kinds of vision algorithms, a large amount of precise and reliable data is needed. 
In this paper we present a system to create extended synthetic sequences of traffic environment scenarios, associated with several types of ground truth data. By integrating vehicle dynamics in a configuration tool, and by using path-tracing in an external rendering engine to render the scenes, a system is created that allows ongoing and flexible creation of highly realistic traffic images. For all images, ground truth data is provided for depth, optical flow, surface normals and semantic scene labeling. Sequences that are produced with this system are more varied and closer to natural images than other synthetic datasets before.", "title": "" }, { "docid": "c0d7dcda032d796c87ab26beb31f6e24", "text": "Chapter 1 Preface I am now going to begin my story (said the old man), so please attend. Contemporary computers process and store huge amounts of data. Some parts of these data are excessive. Data compression is a process that reduces the data size, removing the excessive information. Why is a shorter data sequence often more suitable? The answer is simple: it reduces the costs. A full-length movie of high quality could occupy a vast part of a hard disk. The compressed movie can be stored on a single CD-ROM. Large amounts of data are transmitted by telecommunication satellites. Without compression we would have to launch many more satellites than we do to transmit the same number of television programs. The capacity of Internet links is also limited and several methods reduce the immense amount of transmitted data. Some of them, as mirror or proxy servers, are solutions that minimise a number of transmissions on long distances. The other methods reduce the size of data by compressing them. Multimedia is a field in which data of vast sizes are processed. The sizes of text documents and application files also grow rapidly. Another type of data for which compression is useful are database tables. Nowadays, the amount of information stored in databases grows fast, while their contents often exhibit much redundancy. Data compression methods can be classified in several ways. One of the most important criteria of classification is whether the compression algorithm removes some parts of data which cannot be recovered during the decompression. The algorithms removing irreversibly some parts of data are called lossy, while others are called lossless. The lossy algorithms are usually used when a perfect consistency with the original data is not necessary after the decompression. Such a situation occurs for example in compression of video or picture data. If the recipient of the video …", "title": "" }, { "docid": "3fac31e0592c23c4c2f3aba942389fde", "text": "This paper proposes a method for modelling and simulation of photovoltaic arrays. The method is used to obtain the parameters of the array model using its datasheet information. 
To reduce computational time, the input parameters are reduced to four and the values of shunt resistance Rp and series resistance Rs are estimated by a simulated annealing optimization method. Then we draw I-V and P-V curves at different irradiance levels. A low-complexity analogue MPPT circuit can be developed by using two voltage approximation lines (VALs) that approximate the maximum power point (MPP) locus. In this paper, a fast and low cost analog MPPT method for low power PV systems is proposed. The simulation results coincide with experimental results at different PV systems to validate the effectiveness of the proposed method.", "title": "" } ]
scidocsrr
3e9de364ed172cadb9daa467d77cc5d6
Deep Learning for Reward Design to Improve Monte Carlo Tree Search in ATARI Games
[ { "docid": "45940a48b86645041726120fb066a1fa", "text": "For large state-space Markovian Decision Problems MonteCarlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.", "title": "" }, { "docid": "75119d6caf5dff0f93db968585d580c5", "text": "Model-based reinforcement learning techniques have historically encountered a number of difficulties scaling up to large observation spaces. One promising approach has been to decompose the model learning task into a number of smaller, more manageable sub-problems by factoring the observation space. Typically, many different factorizations are possible, which can make it difficult to select an appropriate factorization without extensive testing. In this paper we introduce the class of recursively decomposable factorizations, and show how exact Bayesian inference can be used to efficiently guarantee predictive performance close to the best factorization in this class. We demonstrate the strength of this approach by presenting a collection of empirical results for 20 different Atari 2600 games.", "title": "" } ]
[ { "docid": "d0bb1b3fc36016b166eb9ed25cb7ee61", "text": "Informed driving is increasingly becoming a key feature for increasing the sustainability of taxi companies. The sensors that are installed in each vehicle are providing new opportunities for automatically discovering knowledge, which, in return, delivers information for real-time decision making. Intelligent transportation systems for taxi dispatching and for finding time-saving routes are already exploring these sensing data. This paper introduces a novel methodology for predicting the spatial distribution of taxi-passengers for a short-term time horizon using streaming data. First, the information was aggregated into a histogram time series. Then, three time-series forecasting techniques were combined to originate a prediction. Experimental tests were conducted using the online data that are transmitted by 441 vehicles of a fleet running in the city of Porto, Portugal. The results demonstrated that the proposed framework can provide effective insight into the spatiotemporal distribution of taxi-passenger demand for a 30-min horizon.", "title": "" }, { "docid": "381a11fe3d56d5850ec69e2e9427e03f", "text": "We present an approximation algorithm that takes a pool of pre-trained models as input and produces from it a cascaded model with similar accuracy but lower average-case cost. Applied to state-of-the-art ImageNet classification models, this yields up to a 2x reduction in floating point multiplications, and up to a 6x reduction in average-case memory I/O. The auto-generated cascades exhibit intuitive properties, such as using lower-resolution input for easier images and requiring higher prediction confidence when using a computationally cheaper model.", "title": "" }, { "docid": "56ff8aa7934ed264908f42025d4c175b", "text": "The identification of design patterns as part of the reengineering process can convey important information to the designer. However, existing pattern detection methodologies generally have problems in dealing with one or more of the following issues: identification of modified pattern versions, search space explosion for large systems and extensibility to novel patterns. In this paper, a design pattern detection methodology is proposed that is based on similarity scoring between graph vertices. Due to the nature of the underlying graph algorithm, this approach has the ability to also recognize patterns that are modified from their standard representation. Moreover, the approach exploits the fact that patterns reside in one or more inheritance hierarchies, reducing the size of the graphs to which the algorithm is applied. Finally, the algorithm does not rely on any pattern-specific heuristic, facilitating the extension to novel design structures. Evaluation on three open-source projects demonstrated the accuracy and the efficiency of the proposed method", "title": "" }, { "docid": "39fb2d2bcea6c4207ee0afab4622f2ed", "text": "BACKGROUND\nThe Golden Gate Bridge (GGB) is a well-known \"suicide magnet\" and the site of approximately 30 suicides per year. Recently, a suicide barrier was approved to prevent further suicides.\n\n\nAIMS\nTo estimate the cost-effectiveness of the proposed suicide barrier, we compared the proposed costs of the barrier over a 20-year period ($51.6 million) to estimated reductions in mortality.\n\n\nMETHOD\nWe reviewed San Francisco and Golden Gate Bridge suicides over a 70-year period (1936-2006). 
We assumed that all suicides prevented by the barrier would attempt suicide with alternative methods and estimated the mortality reduction based on the difference in lethality between GGB jumps and other suicide methods. Cost/benefit analyses utilized estimates of value of statistical life (VSL) used in highway projects.\n\n\nRESULTS\nGGB suicides occur at a rate of approximately 30 per year, with a lethality of 98%. Jumping from other structures has an average lethality of 47%. Assuming that unsuccessful suicides eventually committed suicide at previously reported (12-13%) rates, approximately 286 lives would be saved over a 20-year period at an average cost/life of approximately $180,419 i.e., roughly 6% of US Department of Transportation minimal VSL estimate ($3.2 million).\n\n\nCONCLUSIONS\nCost-benefit analysis suggests that a suicide barrier on the GGB would result in a highly cost-effective reduction in suicide mortality in the San Francisco Bay Area.", "title": "" }, { "docid": "66aff99642972dbe0280c83e4d702e96", "text": "We develop a workload model based on the observed behavior of parallel computers at the San Diego Supercomputer Center and the Cornell Theory Center. This model gives us insight into the performance of strategies for scheduling moldable jobs on space-sharing parallel computers. We find that Adaptive Static Partitioning (ASP), which has been reported to work well for other workloads, does not perform as well as strategies that adapt better to system load. The best of the strategies we consider is one that explicitly reduces allocations when load is high (a variation of Sevcik's (1989) A+ strategy).", "title": "" }, { "docid": "ebea79abc60a5d55d0397d21f54cc85e", "text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. However, extracting business intelligence from location traces is not a trivial task. Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.", "title": "" }, { "docid": "72600a23cc70d9cc3641cbfc7f23ba4d", "text": "Primary cicatricial alopecias (PCAs) are a rare, but important, group of disorders that cause irreversible damage to hair follicles resulting in scarring and permanent hair loss. They may also signify an underlying systemic disease. Thus, it is of paramount importance that clinicians who manage patients with hair loss are able to diagnose these disorders accurately. Unfortunately, PCAs are notoriously difficult conditions to diagnose and treat. 
The aim of this review is to present a rational and pragmatic guide to help clinicians in the professional assessment, investigation and diagnosis of patients with PCA. Illustrating typical clinical and histopathological presentations of key PCA entities we show how dermatoscopy can be profitably used for clinical diagnosis. Further, we advocate the search for loss of follicular ostia as a clinical hallmark of PCA, and suggest pragmatic strategies that allow rapid formulation of a working diagnosis.", "title": "" }, { "docid": "1ac03a7890a0145a8492a881caec4005", "text": "The rapid growth of data and data sharing have been driven an evolution in distributed storage infrastructure. The need for sensitive data protection and the capacity to handle massive data sets have encouraged the research and development of secure and scalable storage systems. This paper identifies major security issues and requirements of data protection related to distributed data storage systems. We classify the security services and techniques in existing or proposed storage systems. We then discuss potential research topics and future trends.", "title": "" }, { "docid": "cefcd78be7922f4349f1bb3aa59d2e1d", "text": "The paper presents performance analysis of modified SEPIC dc-dc converter with low input voltage and wide output voltage range. The operational analysis and the design is done for the 380W power output of the modified converter. The simulation results of modified SEPIC converter are obtained with PI controller for the output voltage. The results obtained with the modified converter are compared with the basic SEPIC converter topology for the rise time, peak time, settling time and steady state error of the output response for open loop. Voltage tracking curve is also shown for wide output voltage range. I. Introduction Dc-dc converters are widely used in regulated switched mode dc power supplies and in dc motor drive applications. The input to these converters is often an unregulated dc voltage, which is obtained by rectifying the line voltage and it will therefore fluctuate due to variations of the line voltages. Switched mode dc-dc converters are used to convert this unregulated dc input into a controlled dc output at a desired voltage level. The recent growth of battery powered applications and low voltage storage elements are increasing the demand of efficient step-up dc–dc converters. Typical applications are in adjustable speed drives, switch-mode power supplies, uninterrupted power supplies, and utility interface with nonconventional energy sources, battery energy storage systems, battery charging for electric vehicles, and power supplies for telecommunication systems etc.. These applications demand high step-up static gain, high efficiency and reduced weight, volume and cost. The step-up stage normally is the critical point for the design of high efficiency converters due to the operation with high input current and high output voltage [1]. The boost converter topology is highly effective in these applications but at low line voltage in boost converter, the switching losses are high because the input current has the maximum value and the highest step-up conversion is required. The inductor has to be oversized for the large current at low line input. As a result, a boost converter designed for universal-input applications is heavily oversized compared to a converter designed for a narrow range of input ac line voltage [2]. 
However, recently new non-isolated dc–dc converter topologies with basic boost are proposed, showing that it is possible to obtain high static gain, low voltage stress and low losses, improving the performance with respect to the classical topologies. Some single stage high power factor rectifiers are presented in [3-6]. A new …", "title": "" }, { "docid": "79ea2c1566b3bb1e27fe715b1a1a385b", "text": "The number of research papers available is growing at a staggering rate. Researchers need tools to help them find the papers they should read among all the papers published each year. In this paper, we present and experiment with hybrid recommender algorithms that combine Collaborative Filtering and Content-based. Filtering to recommend research papers to users. Our hybrid algorithms combine the strengths of each filtering approach to address their individual weaknesses. We evaluated our algorithms through offline experiments on a database of 102, 000 research papers, and through an online experiment with 110 users. For both experiments we used a dataset created from the CiteSeer repository of computer science research papers. We developed separate English and Portuguese versions of the interface and specifically recruited American and Brazilian users to test for cross-cultural effects. Our results show that users value paper recommendations, that the hybrid algorithms can be successfully combined, that different algorithms are more suitable for recommending different kinds of papers, and that users with different levels of experience perceive recommendations differently These results can be applied to develop recommender systems for other types of digital libraries.", "title": "" }, { "docid": "0227971523811a3f36f13eca3b8465c7", "text": "High-dynamic range (HDR) images are commonly used in computer graphics for accurate rendering. However, it is inefficient to store these images because of their large data size. Although vector quantization approach can be used to compress them, a large number of representative colors are still needed to preserve acceptable image quality. This paper presents an efficient color quantization approach to compress HDR images. In the proposed approach, a 1D/2D neighborhood structure is defined for the self-organizing map (SOM) approach and the SOM approach is then used to train a color palette. Afterward, a virtual color palette that has more codevectors is simulated by interpolating the trained color palette. The interpolation process is hardware supported in the current graphics hardware. Hence, there is no need to store the virtual color palette as the representative colors are constructed on the fly. Experimental results show that our approach can obtain good image quality with a moderate color palette.", "title": "" }, { "docid": "b72e2a03a7508cf1394ba80d9d9fc009", "text": "Accurate mediastinal lymph node dissection during thoracotomy is mandatory for staging and for adjuvant therapy in lung cancer. Pre-therapeutic staging for neoadjuvant therapy or for video assisted thoracoscopic resection of lung cancer is achieved usually by CT-scan and mediastinoscopy. However, these methods do not reach the accuracy of open nodal dissection. Therefore we developed a technique of radical video-assisted mediastinoscopic lymphadenectomy (VAMLA). This study was designed to show that VAMLA is feasible and that radicality of lymphadenectomy is comparable to the open procedure.In a prospective study all VAMLA procedures were registered and followed up in a database. 
Specimens of VAMLA were analysed by a single pathologist. Lymph nodes were counted and compared to open lymphadenectomy. The weight of the dissected tissue was documented. In patients receiving tumour resection subsequently to VAMLA, radicality of the previous mediastinoscopic dissection was controlled during thoracotomy.37 patients underwent video-assisted mediastinoscopy from June 1999 to April 2000. Mean duration of anaesthesia was 84.6 (SD 35.8) minutes.In 7 patients radical lymphadenectomy was not intended because of bulky nodal disease or benign disease. The remaining 30 patients underwent complete systematic nodal dissection as VAMLA.18 patients received tumour resection subsequently (12 right- and 6 left-sided thoracotomies). These thoracotomies allowed open re-dissection of 12 paratracheal regions, 10 of which were found free of lymphatic tissue. In two patients, 1 and 2 left over paratracheal nodes were counted respectively. 10/18 re-dissected subcarinal regions were found to be radically dissected by VAMLA. In 6 patients one single node and in the remaining 2 cases 5 and 8 nodes were found, respectively. However these counts also included nodes from the ipsilateral main bronchus. None of these nodes was positive for tumour.Average weight of the tissue that was harvested by VAMLA was 10.1 g (2.2-23.7, SD 6.3). An average number of 20.5 (6-60, SD 12.5) nodes per patient were counted in the specimens. This is comparable to our historical data from open lymphadenectomy.One palsy of the recurrent nerve in a patient with extensive preparation of the nerve and resection of 11 left-sided enlarged nodes was the only severe complication in this series.VAMLA seems to accomplish mediastinal nodal dissection comparable to open lymphadenectomy and supports video assisted surgery for lung cancer. In neoadjuvant setting a correct mediastinal N-staging is achieved.", "title": "" }, { "docid": "d69573f767b2e72bcff5ed928ca8271c", "text": "This article provides a novel analytical method of magnetic circuit on Axially-Laminated Anisotropic (ALA) rotor synchronous reluctance motor when the motor is magnetized on the d-axis. To simplify the calculation, the reluctance of stator magnet yoke and rotor magnetic laminations and leakage magnetic flux all are ignored. With regard to the uneven air-gap brought by the teeth and slots of the stator and rotor, the method resolves the problem with the equivalent air-gap length distribution function, and clarifies the magnetic circuit when the stator teeth are saturated or unsaturated. In order to conduct exact computation, the high order harmonics of the stator magnetic potential are also taken into account.", "title": "" }, { "docid": "437e4883116d3e2cf8ab1fe3b571d3f6", "text": "An electrophysiological study on the effect of aging on the visual pathway and various levels of visual information processing (primary cortex, associate visual motion processing cortex and cognitive cortical areas) was performed. We examined visual evoked potentials (VEPs) to pattern-reversal, motion-onset (translation and radial motion) and visual stimuli with a cognitive task (cognitive VEPs - P300 wave) at luminance of 17 cd/m(2). The most significant age-related change in a group of 150 healthy volunteers (15-85 years of age) was the increase in the P300 wave latency (2 ms per 1 year of age). 
Delays of the motion-onset VEPs (0.47 ms/year in translation and 0.46 ms/year in radial motion) and the pattern-reversal VEPs (0.26 ms/year) and the reductions of their amplitudes with increasing subject age (primarily in P300) were also found to be significant. The amplitude of the motion-onset VEPs to radial motion remained the most constant parameter with increasing age. Age-related changes were stronger in males. Our results indicate that cognitive VEPs, despite larger variability of their parameters, could be a useful criterion for an objective evaluation of the aging processes within the CNS. Possible differences in aging between the motion-processing system and the form-processing system within the visual pathway might be indicated by the more pronounced delay in the motion-onset VEPs and by their preserved size for radial motion (a biologically significant variant of motion) compared to the changes in pattern-reversal VEPs.", "title": "" }, { "docid": "a79f9ad24c4f047d8ace297b681ccf0a", "text": "BACKGROUND\nLe Fort III distraction advances the Apert midface but leaves the central concavity and vertical compression untreated. The authors propose that Le Fort II distraction and simultaneous zygomatic repositioning as a combined procedure can move the central midface and lateral orbits in independent vectors in order to improve the facial deformity. The purpose of this study was to determine whether this segmental movement results in more normal facial proportions than Le Fort III distraction.\n\n\nMETHODS\nComputed tomographic scan analyses were performed before and after distraction in patients undergoing Le Fort III distraction (n = 5) and Le Fort II distraction with simultaneous zygomatic repositioning (n = 4). The calculated axial facial ratios and vertical facial ratios relative to the skull base were compared to those of unoperated Crouzon (n = 5) and normal (n = 6) controls.\n\n\nRESULTS\nWith Le Fort III distraction, facial ratios did not change with surgery and remained lower (p < 0.01; paired t test comparison) than normal and Crouzon controls. Although the face was advanced, its shape remained abnormal. With the Le Fort II segmental movement procedure, the central face advanced and lengthened more than the lateral orbit. This differential movement changed the abnormal facial ratios that were present before surgery into ratios that were not significantly different from normal controls (p > 0.05).\n\n\nCONCLUSION\nCompared with Le Fort III distraction, Le Fort II distraction with simultaneous zygomatic repositioning normalizes the position and the shape of the Apert face.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, III.", "title": "" }, { "docid": "70abb68f2dca404e688127f2559e8fb1", "text": "Mental stress is a serious problem experienced by college students which often leads to a decline in academic performance, limited social life, alcoholism and drug abuse. Conventional treatments for coping with mental stress hinge on the affected individual's interaction with a psychiatrist/social support group or indulging in relaxing activities. Virtual Reality therapy (VRT) is an upcoming technique for mental stress treatment, which confines the interaction of the user with a Virtual Environment (VE). In this study, we segregate the participants into two test groups-Control and Stressed. Each participant is induced with stress by means of the Color STROOP task. 
Then the participant is subjected to a VR-based stress therapy that includes an Island environment, a forest environment and calming instrumental music. Effectiveness of the therapy is assessed using the participant's task performances before and after VRT along with the Positive And Negative Affect Schedule (PANAS) [1] questionnaire.", "title": "" }, { "docid": "7a1aa7db367a45ff48fb31f1c04b7fef", "text": "As the size of software systems increases, the algorithms and data structures of the computation no longer constitute the major design problems. When systems are constructed from many components, the organization of the overall system—the software architecture—presents a new set of design problems. This level of design has been addressed in a number of ways including informal diagrams and descriptive terms, module interconnection languages, templates and frameworks for systems that serve the needs of specific domains, and formal models of component integration mechanisms. In this paper we provide an introduction to the emerging field of software architecture. We begin by considering a number of common architectural styles upon which many systems are currently based and show how different styles can be combined in a single design. Then we present six case studies to illustrate how architectural representations can improve our understanding of complex software systems. Finally, we survey some of the outstanding problems in the field, and consider a few of the promising research directions.", "title": "" }, { "docid": "b46674d231758eb3273a9a7dcdf9dba7", "text": "Background The International Classification of Functioning, Disability and Health (ICF) model of the consequences of disease identifies three health outcomes, impairment, activity limitations and participation restrictions. However, few orthopaedic health outcome measures were developed with reference to the ICF. This study examined the ability of a valid and frequently used measure of upper limb function, namely the Disabilities of the Arm, Shoulder and Hand Questionnaire (DASH), to operationalise the ICF. Methods Twenty-four judges used the method of Discriminant Content Validation to allocate the 38 items of the DASH to the theoretical definition of one or more ICF outcome. Onesample t-tests classified each item as measuring, impairment, activity limitations, participation restrictions, or a combination thereof. Results The DASH contains items able to measure each of the three ICF outcomes with discriminant validity. The DASH contains five pure impairment items, 19 pure activity limitations items and three participation restriction items. In addition, seven items measured both activity limitations and participation restrictions. Conclusions The DASH can measure the three health outcomes identified by the ICF. Consequently the DASH could be used to examine the impact of trauma and subsequent interventions on each health outcome in the absence of measurement confound.", "title": "" } ]
scidocsrr
dc2881209afb605d9ffaa57080b25dfb
Learning representations of emotional speech with deep convolutional generative adversarial networks
[ { "docid": "5c788d1b3fc2f063407d5d370e7703bd", "text": "Dimensional models have been proposed in psychology studies to represent complex human emotional expressions. Activation and valence are two common dimensions in such models. They can be used to describe certain emotions. For example, anger is one type of emotion with a low valence and high activation value; neutral has both a medium level valence and activation value. In this work, we propose to apply multi-task learning to leverage activation and valence information for acoustic emotion recognition based on the deep belief network (DBN) framework. We treat the categorical emotion recognition task as the major task. For the secondary task, we leverage activation and valence labels in two different ways, category level based classification and continuous level based regression. The combination of the loss functions from the major and secondary tasks is used as the objective function in the multi-task learning framework. After iterative optimization, the values from the last hidden layer in the DBN are used as new features and fed into a support vector machine classifier for emotion recognition. Our experimental results on the Interactive Emotional Dyadic Motion Capture and Sustained Emotionally Colored Machine-Human Interaction Using Nonverbal Expression databases show significant improvements on unweighted accuracy, illustrating the benefit of utilizing additional information in a multi-task learning setup for emotion recognition.", "title": "" }, { "docid": "3f5eed1f718e568dc3ba9abbcd6bfedd", "text": "The automatic recognition of spontaneous emotions from speech is a challenging task. On the one hand, acoustic features need to be robust enough to capture the emotional content for various styles of speaking, and while on the other, machine learning algorithms need to be insensitive to outliers while being able to model the context. Whereas the latter has been tackled by the use of Long Short-Term Memory (LSTM) networks, the former is still under very active investigations, even though more than a decade of research has provided a large set of acoustic descriptors. In this paper, we propose a solution to the problem of `context-aware' emotional relevant feature extraction, by combining Convolutional Neural Networks (CNNs) with LSTM networks, in order to automatically learn the best representation of the speech signal directly from the raw time representation. In this novel work on the so-called end-to-end speech emotion recognition, we show that the use of the proposed topology significantly outperforms the traditional approaches based on signal processing techniques for the prediction of spontaneous and natural emotions on the RECOLA database.", "title": "" } ]
[ { "docid": "848fbbcf6e679191fd4160db5650ef65", "text": "The capturing of angular and spatial information of the scene using single camera is made possible by new emerging technology referred to as plenoptic camera. Both angular and spatial information, enable various post-processing applications, e.g. refocusing, synthetic aperture, super-resolution, and 3D scene reconstruction. In the past, multiple traditional cameras were used to capture the angular and spatial information of the scene. However, recently with the advancement in optical technology, plenoptic cameras have been introduced to capture the scene information. In a plenoptic camera, a lenslet array is placed between the main lens and the image sensor that allows multiplexing of the spatial and angular information onto a single image, also referred to as plenoptic image. The placement of the lenslet array relative to the main lens and the image sensor, results in two different optical designs of a plenoptic camera, also referred to as plenoptic 1.0 and plenoptic 2.0. In this work, we present a novel dataset captured with plenoptic 1.0 (Lytro Illum) and plenoptic 2.0 (Raytrix R29) cameras for the same scenes under the same conditions. The dataset provides the benchmark contents for various research and development activities for plenoptic images.", "title": "" }, { "docid": "bdb9f3822ef89276b1aa1d493d1f9379", "text": "Individual performance is of high relevance for organizations and individuals alike. Showing high performance when accomplishing tasks results in satisfaction, feelings of selfefficacy and mastery (Bandura, 1997; Kanfer et aL, 2005). Moreover, high performing individuals get promoted, awarded and honored. Career opportunities for individuals who perform well are much better than those of moderate or low performing individuals (Van Scotter et aI., 2000). This chapter summarizes research on individual performance and addresses performance as a multi-dimensional and dynamic concept. First, we define the concept of performance, next we discuss antecedents of between-individual variation of performance, and describe intraindividual change and variability in performance, and finally, we present a research agenda for future research.", "title": "" }, { "docid": "fbc8d5de518adc5b9ed7b6bb14c7f526", "text": "Collection data structures have a major impact on the performance of applications, especially in languages such as Java, C#, or C++. This requires a developer to select an appropriate collection from a large set of possibilities, including different abstractions (e.g. list, map, set, queue), and multiple implementations. In Java, the default implementation of collections is provided by the standard Java Collection Framework (JCF). However, there exist a large variety of less known third-party collection libraries which can provide substantial performance benefits with minimal code changes.\n In this paper, we first study the popularity and usage patterns of collection implementations by mining a code corpus comprised of 10,986 Java projects. We use the results to evaluate and compare the performance of the six most popular alternative collection libraries in a large variety of scenarios. We found that for almost every scenario and JCF collection type there is an alternative implementation that greatly decreases memory consumption while offering comparable or even better execution time. 
Memory savings range from 60% to 88% thanks to reduced overhead and some operations execute 1.5x to 50x faster.\n We present our results as a comprehensive guideline to help developers in identifying the scenarios in which an alternative implementation can provide a substantial performance improvement. Finally, we discuss how some coding patterns result in substantial performance differences of collections.", "title": "" }, { "docid": "36e548db260b7df13d0542ff1662bb8b", "text": "Traditionally, stemming has been applied to Information Retrieval tasks by transforming words in documents to their root form before indexing, and applying a similar transformation to query terms. Although it increases recall, this naive strategy does not work well for Web Search since it lowers precision and requires a significant amount of additional computation.\n In this paper, we propose a context sensitive stemming method that addresses these two issues. Two unique properties make our approach feasible for Web Search. First, based on statistical language modeling, we perform context sensitive analysis on the query side. We accurately predict which of its morphological variants is useful to expand a query term with before submitting the query to the search engine. This dramatically reduces the number of bad expansions, which in turn reduces the cost of additional computation and improves the precision at the same time. Second, our approach performs a context sensitive document matching for those expanded variants. This conservative strategy serves as a safeguard against spurious stemming, and it turns out to be very important for improving precision. Using word pluralization handling as an example of our stemming approach, our experiments on a major Web search engine show that stemming only 29% of the query traffic, we can improve relevance as measured by average Discounted Cumulative Gain (DCG5) by 6.1% on these queries and 1.8% over all query traffic.", "title": "" }, { "docid": "8a6ceb55f941ab7ef33a85e454fd2248", "text": "This paper presents a model-based algorithm that estimates how the driver of a vehicle can either steer, brake, or accelerate to avoid colliding with an arbitrary object. In this algorithm, the motion of the vehicle is described by a linear bicycle model, and the perimeter of the vehicle is represented by a rectangle. The estimated perimeter of the object is described by a polygon that is allowed to change size, shape, position, and orientation at sampled time instances. Potential evasive maneuvers are modeled, parameterized, and approximated such that an analytical expression can be derived to estimate the set of maneuvers that the driver can use to avoid a collision. This set of maneuvers is then assessed to determine if the driver needs immediate assistance to avoid or mitigate an accident. The proposed threat-assessment algorithm is evaluated using authentic data from both real traffic conditions and collision situations on a test track and by using simulations with a detailed vehicle model. The evaluations show that the algorithm outperforms conventional threat-assessment algorithms at rear-end collisions in terms of the timing of autonomous brake activation. This is crucial for increasing the performance of collision-avoidance systems and for decreasing the risk of unnecessary braking. 
Moreover, the algorithm is computationally efficient and can be used to assist the driver in avoiding or mitigating collisions with all types of road users in all kinds of traffic scenarios.", "title": "" }, { "docid": "341e0b7d04b333376674dac3c0888f50", "text": "Software archives contain historical information about the development process of a software system. Using data mining techniques rules can be extracted from these archives. In this paper we discuss how standard visualization techniques can be applied to interactively explore these rules. To this end we extended the standard visualization techniques for association rules and sequence rules to also show the hierarchical order of items. Clusters and outliers in the resulting visualizations provide interesting insights into the relation between the temporal development of a system and its static structure. As an example we look at the large software archive of the MOZILLA open source project. Finally we discuss what kind of regularities and anomalies we found and how these can then be leveraged to support software engineers.", "title": "" }, { "docid": "133a48a5c6c568d33734bd95d4aec0b2", "text": "The topic information of conversational content is important for continuation with communication, so topic detection and tracking is one of important research. Due to there are many topic transform occurring frequently in long time communication, and the conversation maybe have many topics, so it's important to detect different topics in conversational content. This paper detects topic information by using agglomerative clustering of utterances and Dynamic Latent Dirichlet Allocation topic model, uses proportion of verb and noun to analyze similarity between utterances and cluster all utterances in conversational content by agglomerative clustering algorithm. The topic structure of conversational content is friability, so we use speech act information and gets the hypernym information by E-HowNet that obtains robustness of word categories. Latent Dirichlet Allocation topic model is used to detect topic in file units, it just can detect only one topic if uses it in conversational content, because of there are many topics in conversational content frequently, and also uses speech act information and hypernym information to train the latent Dirichlet allocation models, then uses trained models to detect different topic information in conversational content. For evaluating the proposed method, support vector machine is developed for comparison. According to the experimental results, we can find the proposed method outperforms the approach based on support vector machine in topic detection and tracking in spoken dialogue.", "title": "" }, { "docid": "3ecf50ed3a3bc3fc6d9d1beb88ea4136", "text": "Theoretical understanding of deep learning is one of the most important tasks facing the statistics and machine learning communities. While multilayer, or deep, neural networks (DNNs) originated as models of biological networks in neuroscience [1, 2, 3, 4] and psychology [5, 6], and as engineering methods [7, 8], they have become a centerpiece of the machine learning (ML) toolbox. In ML, DNNs are simultaneously one of the simplest and most complex methods. They consist of many interconnected nodes that are grouped into layers (see Figure 1a), whose operations are stunningly simple; the nth node of the network at a given layer i, xi(n) is simply a nonlinear function f(·) (e.g. 
saturating nonlinearity) applied to an affine function of the previous layer", "title": "" }, { "docid": "314e633721d6519075c55b4ceec744c9", "text": "To overcome the slow kinetics of the volume phase transition of stimuli-responsive hydrogels as platforms for soft actuators, thermally responsive comb-type hydrogels were prepared using synthesized poly(N-isopropylacrylamide) macromonomers bearing graft chains. Fast responding light-responsive hydrogels were fabricated by combining a comb-type hydrogel matrix with photothermal magnetite nanoparticles (MNP). The MNPs dispersed in the matrix provide heat to stimulate the volume change of the hydrogel matrix by converting absorbed visible light to thermal energy. In this process, the comb-type hydrogel matrix exhibited a rapid response due to the free, mobile grafted chains. The comb-type hydrogel exhibited significantly enhanced light-induced volume shrinkage and rapid recovery. The comb-type hydrogels containing MNP were successfully used to fabricate a bilayer-type photo-actuator with fast bending motion.", "title": "" }, { "docid": "080f76412f283fb236c28678bf9dada8", "text": "We describe a new algorithm for robot localization, efficient both in terms of memory and processing time. It transforms a stream of laser range sensor data into a probabilistic calculation of the robot’s position, using a bidirectional Long Short-Term Memory (LSTM) recurrent neural network (RNN) to learn the structure of the environment and to answer queries such as: in which room is the robot? To achieve this, the RNN builds an implicit map of the environment.", "title": "" }, { "docid": "f0efa93a150ca1be1351277ea30e370b", "text": "We describe an effort to train a RoboCup soccer-playing agent playing in the Simulation League using casebased reasoning. The agent learns (builds a case base) by observing the behaviour of existing players and determining the spatial configuration of the objects the existing players pay attention to. The agent can then use the case base to determine what actions it should perform given similar spatial configurations. When observing a simple goal-driven, rule-based, stateless agent, the trained player appears to imitate the behaviour of the original and experimental results confirm the observed behaviour. The process requires little human intervention and can be used to train agents exhibiting diverse behaviour in an automated manner.", "title": "" }, { "docid": "176386fd6f456d818d7ebf81f65d5030", "text": "Event-driven architecture is gaining momentum in research and application areas as it promises enhanced responsiveness and asynchronous communication. The combination of event-driven and service-oriented architectural paradigms and web service technologies provide a viable possibility to achieve these promises. This paper outlines an architectural design and accompanying implementation technologies for its realization as a web services-based event-driven SOA.", "title": "" }, { "docid": "167703a2adda8ec3ab7d463ab5693f77", "text": "Conventional DEA models assume deterministic and precise data for input and output observations in a static situation, and their DMUs are often ranked incompletely. To work with interval data, DMUS' complete ranking as well as dynamic assessment issues synchronously, we put forward a hybrid model for evaluating the relative efficiencies of a set of DMUs over an observed time period with consideration of interval DEA, super-efficiency DEA and dynamic DEA. 
However, few researchers, if any, considered this issue within the combination of these three models. The hybrid model proposed in this paper enables us to (i) take interval data in input and output into account, (ii) rank DEA efficient DMUs completely, (iii) obtain the overall dynamic efficiency of DMUs over the entire observed period. We finally illustrate the calculation procedure of the proposed approach by a numerical example.", "title": "" }, { "docid": "f4616ce19907f8502fb7520da68c6852", "text": "Mining outliers in database is to find exceptional objects that deviate from the rest of the data set. Besides classical outlier analysis algorithms, recent studies have focused on mining local outliers, i.e., the outliers that have density distribution significantly different from their neighborhood. The estimation of density distribution at the location of an object has so far been based on the density distribution of its k-nearest neighbors [2, 11]. However, when outliers are in the location where the density distributions in the neighborhood are significantly different, for example, in the case of objects from a sparse cluster close to a denser cluster, this may result in wrong estimation. To avoid this problem, here we propose a simple but effective measure on local outliers based on a symmetric neighborhood relationship. The proposed measure considers both neighbors and reverse neighbors of an object when estimating its density distribution. As a result, outliers so discovered are more meaningful. To compute such local outliers efficiently, several mining algorithms are developed that detects top-n outliers based on our definition. A comprehensive performance evaluation and analysis shows that our methods are not only efficient in the computation but also more effective in ranking outliers.", "title": "" }, { "docid": "d6c3896357022a27513f63a5e3f8b4d3", "text": "The aging of the world's population presents vast societal and individual challenges. The relatively shrinking workforce to support the growing population of the elderly leads to a rapidly increasing amount of technological innovations in the field of elderly care. In this paper, we present an integrated framework consisting of various intelligent agents with their own expertise and responsibilities working in a holistic manner to assist, care, and accompany the elderly around the clock in the home environment. To support the independence of the elderly for Aging-In-Place (AIP), the intelligent agents must well understand the elderly, be fully aware of the home environment, possess high-level reasoning and learning capabilities, and provide appropriate tender care in the physical, cognitive, emotional, and social aspects. The intelligent agents sense in non-intrusive ways from different sources and provide wellness monitoring, recommendations, and services across diverse platforms and locations. They collaborate together and interact with the elderly in a natural and holistic manner to provide all-around tender care reactively and proactively. We present our implementation of the collaboration framework with a number of realized functionalities of the intelligent agents, highlighting its feasibility and importance in addressing various challenges in AIP.", "title": "" }, { "docid": "f5e4bf1536d2ef7065b77be4e0c37ddc", "text": "This research addresses management control in the front end of innovation projects. We conceptualize and analyze PMOs more broadly than just as a specialized project-focused organizational unit. 
Building on theories of management control, organization design, and innovation front end literature, we assess the role of PMO as an integrative arrangement. The empirical material is derived from four companies. The results show a variety of management control mechanisms that can be considered as integrative organizational arrangements. Such organizational arrangements can be considered as an alternative to a non-existent PMO, or to complement a (non-existent) PMO's tasks. The paper also contrasts prior literature by emphasizing the desirability of a highly organic or embedded matrix structure in the organization. Finally, we propose that the development path of the management approach proceeds by first emphasizing diagnostic and boundary systems (with mechanistic management approaches) followed by intensive use of interactive and belief systems (with value-based management approaches). The major contribution of this paper is in the organizational and managerial mechanisms of a firm that is managing multiple innovation projects. This research also expands upon the existing PMO research to include a broader management control approach for managing projects in companies. © 2011 Elsevier Ltd. and IPMA. All rights reserved.", "title": "" }, { "docid": "0ba388167309c8821f0d6a1e9569f1eb", "text": "With advancement in science and technology, computing systems are becoming increasingly more complex with an increasing variety of heterogeneous software and hardware components. They are thus becoming increasingly more difficult to monitor, manage and maintain. Traditional approaches to system management have been largely based on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This has been well known and experienced as a cumber-some, labor intensive, and error prone process. In addition, this process is difficult to keep up with the rapidly changing environments. There is thus a pressing need for automatic and efficient approaches to monitor and manage complex computing systems.A popular approach to system management is based on analyzing system log files. However, some new aspects of the log files have been less emphasized in existing methods from data mining and machine learning community. The various formats and relatively short text messages of log files, and temporal characteristics in data representation pose new challenges. In this paper, we will describe our research efforts on mining system log files for automatic management. In particular, we apply text mining techniques to categorize messages in log files into common situations, improve categorization accuracy by considering the temporal characteristics of log messages, and utilize visualization tools to evaluate and validate the interesting temporal patterns for system management.", "title": "" }, { "docid": "cc10178729ca27c413223472f1aa08be", "text": "The automatic classification of ships from aerial images is a considerable challenge. Previous works have usually applied image processing and computer vision techniques to extract meaningful features from visible spectrum images in order to use them as the input for traditional supervised classifiers. We present a method for determining if an aerial image of visible spectrum contains a ship or not. The proposed architecture is based on Convolutional Neural Networks (CNN), and it combines neural codes extracted from a CNN with a k-Nearest Neighbor method so as to improve performance. 
The kNN results are compared to those obtained with the CNN Softmax output. Several CNN models have been configured and evaluated in order to seek the best hyperparameters, and the most suitable setting for this task was found by using transfer learning at different levels. A new dataset (named MASATI) composed of aerial imagery with more than 6000 samples has also been created to train and evaluate our architecture. The experimentation shows a success rate of over 99% for our approach, in contrast with the 79% obtained with traditional methods in classification of ship images, also outperforming other methods based on CNNs. A dataset of images (MWPU VHR-10) used in previous works was additionally used to evaluate the proposed approach. Our best setup achieves a success ratio of 86% with these data, significantly outperforming previous state-of-the-art ship classification methods.", "title": "" }, { "docid": "8296ce0143992c7513051c70758541be", "text": "This artic,le introduces Adaptive Resonance Theor) 2-A (ART 2-A), an efjCicient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architect~~rc, hut at a speed two to three orders of magnitude fbster. Analysis and simulations show how’ the ART 2-A systems correspond to ART 2 rivnamics at both the fast-learn limit and at intermediate learning rate.r. Intermediate ieurning rates permit fust commitment of category nodes hut slow recoding, analogous to properties of word frequency effects. encoding specificity ef@cts, and episodic memory. Better noise tolerunce is hereby achieved ti’ithout a loss of leurning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes pructical the use of ART 2 modules in large scale neural computation. Keywords-Neural networks, Pattern recognition. Category formation. Fast learning, Adaptive resonance.", "title": "" }, { "docid": "089573eaa8c1ad8c7ad244a8ccca4049", "text": "We consider the problem of assigning an input vector to one of m classes by predicting P(c|x) for c = 1, o, m. For a twoclass problem, the probability of class one given x is estimated by s(y(x)), where s(y) = 1/(1 + ey ). A Gaussian process prior is placed on y(x), and is combined with the training data to obtain predictions for new x points. We provide a Bayesian treatment, integrating over uncertainty in y and in the parameters that control the Gaussian process prior; the necessary integration over y is carried out using Laplace’s approximation. The method is generalized to multiclass problems (m > 2) using the softmax function. We demonstrate the effectiveness of the method on a number of datasets.", "title": "" } ]
scidocsrr
1c59aac3510e890a4f421d6950445498
A deep learning approach for detecting malicious JavaScript code
[ { "docid": "e5104baa94ee849d3544c865443a2223", "text": "Modern attacks are being made against client side applications, such as web browsers, which most users use to surf and communicate on the internet. Client honeypots visit and interact with suspect web sites in order to detect and collect information about malware to protect users from malicious websites or to allow security professionals to investigate malicious content. This paper will present the idea of using web-based technology and integrating it with a client honeypot by building a low interaction client honeypot tool called Honeyware. It describes the benefits of Honeyware as well as the challenges of a low interaction client honeypot and provides some ideas for how these challenges could be overcome.", "title": "" }, { "docid": "c86e4bf0577f49d6d4384379651c7d9a", "text": "The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions are discussed as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis on SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.", "title": "" } ]
[ { "docid": "45082917d218ec53559c328dcc7c02db", "text": "How are people able to think about things they have never seen or touched? We demonstrate that abstract knowledge can be built analogically from more experience-based knowledge. People's understanding of the abstract domain of time, for example, is so intimately dependent on the more experience-based domain of space that when people make an air journey or wait in a lunch line, they also unwittingly (and dramatically) change their thinking about time. Further, our results suggest that it is not sensorimotor spatial experience per se that influences people's thinking about time, but rather people's representations of and thinking about their spatial experience.", "title": "" }, { "docid": "7d23a8326c6e15ca2748e28294797865", "text": "Process management is one of the important tasks performed by the operating system. The performance of the system depends on the CPU scheduling algorithms. The main aim of the CPU scheduling algorithms is to minimize waiting time, turnaround time, response time and context switching and maximizing CPU utilization. First-Come-First-Served (FCFS) Round Robin (RR), Shortest Job First (SJF) and, Priority Scheduling are some popular CPU scheduling algorithms. In time shared systems, Round Robin CPU scheduling is the preferred choice. In Round Robin CPU scheduling, performance of the system depends on the choice of the optimal time quantum. This paper presents an improved Round Robin CPU scheduling algorithm coined enhancing CPU performance using the features of Shortest Job First and Round Robin scheduling with varying time quantum. The proposed algorithm is experimentally proven better than conventional RR. The simulation results show that the waiting time and turnaround time have been reduced in the proposed algorithm compared to traditional RR.", "title": "" }, { "docid": "f437f971d7d553b69d438a469fd26d41", "text": "This paper introduces a single-chip, 200 200element sensor array implemented in a standard two-metal digital CMOS technology. The sensor is able to grab the fingerprint pattern without any use of optical and mechanical adaptors. Using this integrated sensor, the fingerprint is captured at a rate of 10 F/s by pressing the finger skin onto the chip surface. The fingerprint pattern is sampled by capacitive sensors that detect the electric field variation induced by the skin surface. Several design issues regarding the capacitive sensing problem are reported and the feedback capacitive sensing scheme (FCS) is introduced. More specifically, the problem of the charge injection in MOS switches has been revisited for charge amplifier design.", "title": "" }, { "docid": "a9664b0b87fabeb87478433b6c8c513b", "text": "Rendering realistic organic materials is a challenging issue. The human eye is an important part of nonverbal communication which, consequently, requires specific modeling and rendering techniques to enhance the realism of virtual characters. We propose an image-based method for estimating both iris morphology and scattering features in order to generate convincing images of virtual eyes. In this regard, we develop a technique to unrefract iris photographs. We model the morphology of the human iris as an irregular multilayered tissue. We then approximate the scattering features of the captured iris. 
Finally, we propose a real-time rendering technique based on the subsurface texture mapping representation and introduce a precomputed refraction function as well as a caustic function, which accounts for the light interactions at the corneal interface.", "title": "" }, { "docid": "89c7754a85459768c7aa53309821c58e", "text": "Recent developments in cryptography and, in particular in Fully Homomorphic Encryption (FHE), have allowed for the development of new privacy preserving machine learning schemes. In this paper, we show how these schemes can be applied to the automatic assessment of speech affected by medical conditions, allowing for patient privacy in diagnosis and monitoring scenarios. More specifically, we present results for the assessment of the degree of Parkinsons Disease, the detection of a Cold, and both the detection and assessment of the degree of Depression. To this end, we use a neural network in which all operations are performed in an FHE context. This implies replacing the activation functions by linear and second degree polynomials, as only additions and multiplications are viable. Furthermore, to guarantee that the inputs of these activation functions fall within the convergence interval of the approximation, a batch normalization layer is introduced before each activation function. After training the network with unencrypted data, the resulting model is then employed in an encrypted version of the network, to produce encrypted predictions. Our tests show that the use of this framework yields results with little to no performance degradation, in comparison to the baselines produced for the same datasets.", "title": "" }, { "docid": "e70c4ad755edef1fbea472e029bd7e22", "text": "This narrative review examines assessments of the reliability of online health information retrieved through social media to ascertain whether health information accessed or disseminated through social media should be evaluated differently than other online health information. Several medical, library and information science, and interdisciplinary databases were searched using terms relating to social media, reliability, and health information. While social media's increasing role in health information consumption is recognized, studies are dominated by investigations of traditional (i.e., non-social media) sites. To more richly assess constructions of reliability when using social media for health information, future research must focus on health consumers' unique contexts, virtual relationships, and degrees of trust within their social networks.", "title": "" }, { "docid": "bcb71f55375c1948283281d60ace5549", "text": "This paper proposes a novel approach named AGM to efficiently mine the association rules among the frequently appearing substructures in a given graph data set. A graph transaction is represented by an adjacency matrix, and the frequent patterns appearing in the matrices are mined through the extended algorithm of the basket analysis. Its performance has been evaluated for the artificial simulation data and the carcinogenesis data of Oxford University and NTP. Its high efficiency has been confirmed for the size of a real-world problem. . . .", "title": "" }, { "docid": "fcb9614925e939898af060b9ee52f357", "text": "The authors present a method for constructing a feedforward neural net implementing an arbitrarily good approximation to any L_2 function over (-1, 1)^n.
The net uses n input nodes, a single hidden layer whose width is determined by the function to be implemented and the allowable mean square error, and a linear output neuron. Error bounds and an example are given for the method.<<ETX>>", "title": "" }, { "docid": "c0e5dfd33b2cb87f91c58d47286fde40", "text": "Recently, a variety of representation learning approaches have been developed in the literature to induce latent generalizable features across two domains. In this paper, we extend the standard hidden Markov models (HMMs) to learn distributed state representations to improve cross-domain prediction performance. We reformulate the HMMs by mapping each discrete hidden state to a distributed representation vector and employ an expectationmaximization algorithm to jointly learn distributed state representations and model parameters. We empirically investigate the proposed model on cross-domain part-ofspeech tagging and noun-phrase chunking tasks. The experimental results demonstrate the effectiveness of the distributed HMMs on facilitating domain adaptation.", "title": "" }, { "docid": "728ea68ac1a50ae2d1b280b40c480aec", "text": "This paper presents a new metaprogramming library, CL ARRAY, that offers multiplatform and generic multidimensional data containers for C++ specifically adapted for parallel programming. The CL ARRAY containers are built around a new formalism for representing the multidimensional nature of data as well as the semantics of multidimensional pointers and contiguous data structures. We also present OCL ARRAY VIEW, a concept based on metaprogrammed enveloped objects that supports multidimensional transformations and multidimensional iterators designed to simplify and formalize the interfacing process between OpenCL APIs, standard template library (STL) algorithms and CL ARRAY containers. Our results demonstrate improved performance and energy savings over the three most popular container libraries available to the developer community for use in the context of multi-linear algebraic applications.", "title": "" }, { "docid": "ed0f4616a36a2dffb6120bccd7539d0c", "text": "Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to \"model-free\" and \"model-based\" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. 
Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.", "title": "" }, { "docid": "f631cca2bd0c22f60af1d5f63a7522b5", "text": "We introduce the problem of k-pattern set mining, concerned with finding a set of k related patterns under constraints. This contrasts to regular pattern mining, where one searches for many individual patterns. The k-pattern set mining problem is a very general problem that can be instantiated to a wide variety of well-known mining tasks including concept-learning, rule-learning, redescription mining, conceptual clustering and tiling. To this end, we formulate a large number of constraints for use in k-pattern set mining, both at the local level, that is, on individual patterns, and on the global level, that is, on the overall pattern set. Building general solvers for the pattern set mining problem remains a challenge. Here, we investigate to what extent constraint programming (CP) can be used as a general solution strategy. We present a mapping of pattern set constraints to constraints currently available in CP. This allows us to investigate a large number of settings within a unified framework and to gain insight in the possibilities and limitations of these solvers. This is important as it allows us to create guidelines in how to model new problems successfully and how to model existing problems more efficiently. It also opens up the way for other solver technologies.", "title": "" }, { "docid": "e83c2fb41895329f8ce6d57ec6908f6a", "text": "This paper studies how to conduct efficiency assessment using data envelopment analysis (DEA) in interval and/or fuzzy input–output environments. A new pair of interval DEA models is constructed on the basis of interval arithmetic, which differs from the existing DEA models handling interval data in that the former is a linear CCR model without the need of extra variable alternations and uses a fixed and unified production frontier (i.e. the same constraint set) to measure the efficiencies of decision-making units (DMUs) with interval input and output data, while the latter is usually a nonlinear optimization problem with the need of extra variable alternations or scale transformations and utilizes variable production frontiers (i.e. different constraint sets) to measure interval efficiencies. Ordinal preference information and fuzzy data are converted into interval data through the estimation of permissible intervals and -levelsets, respectively, and are incorporated into the interval DEA models. The proposed interval DEA models are developed for measuring the lower and upper bounds of the best relative efficiency of each DMU with interval input and output data, which are different from the interval formed by the worst and the best relative efficiencies of each DMU. A minimax regret-based approach (MRA) is introduced to compare and rank
the efficiency intervals of DMUs. Two numerical examples are provided to show the applications of the proposed interval DEA models and the preference ranking approach. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c4171bd7b870d26e0b2520fc262e7c88", "text": "Each year, the treatment decisions for more than 230, 000 breast cancer patients in the U.S. hinge on whether the cancer has metastasized away from the breast. Metastasis detection is currently performed by pathologists reviewing large expanses of biological tissues. This process is labor intensive and error-prone. We present a framework to automatically detect and localize tumors as small as 100×100 pixels in gigapixel microscopy images sized 100, 000×100, 000 pixels. Our method leverages a convolutional neural network (CNN) architecture and obtains state-of-the-art results on the Camelyon16 dataset in the challenging lesion-level tumor detection task. At 8 false positives per image, we detect 92.4% of the tumors, relative to 82.7% by the previous best automated approach. For comparison, a human pathologist attempting exhaustive search achieved 73.2% sensitivity. We achieve image-level AUC scores above 97% on both the Camelyon16 test set and an independent set of 110 slides. In addition, we discover that two slides in the Camelyon16 training set were erroneously labeled normal. Our approach could considerably reduce false negative rates in metastasis detection.", "title": "" }, { "docid": "9d175a211ec3b0ee7db667d39c240e1c", "text": "In recent years, there has been an increased effort to introduce coding and computational thinking in early childhood education. In accordance with the international trend, programming has become an increasingly growing focus in European education. With over 9.5 million iOS downloads, ScratchJr is the most popular freely available introductory programming language for young children (ages 5-7). This paper provides an overview of ScratchJr, and the powerful ideas from computer science it is designed to teach. In addition, data analytics are presented to show trends of usage in Europe and and how it compares to the rest of the world. Data reveals that countries with robust computer science initiatives such as the UK and the Nordic countries have high usage of ScratchJr.", "title": "" }, { "docid": "f967ad72daeb84e2fce38aec69997c8a", "text": "While HCI has focused on multitasking with information workers, we report on multitasking among Millennials who grew up with digital media - focusing on college students. We logged computer activity and used biosensors to measure stress of 48 students for 7 days for all waking hours, in their in situ environments. We found a significant positive relationship with stress and daily time spent on computers. Stress is positively associated with the amount of multitasking. Conversely, stress is negatively associated with Facebook and social media use. Heavy multitaskers use significantly more social media and report lower positive affect than light multitaskers.
Night habits affect multitasking the following day: late-nighters show longer duration of computer use and those ending their activities earlier in the day multitask less. Our study shows that college students multitask at double the frequency compared to studies of information workers. These results can inform designs for stress management of college students.", "title": "" }, { "docid": "d4d52c325a33710cfa59a2067dbc553c", "text": "This paper presents an SDR (Software-Defined Radio) implementation of an FMCW (Frequency-Modulated Continuous-Wave) radar using a USRP (Universal Software Radio Peripheral) device. The tools used in the project and the architecture of implementation with FPGA real-time processing and PC off-line processing are covered. This article shows the detailed implementation of an FMCW radar using a USRP device with no external analog devices except for one amplifier and two antennas. The FMCW radar demonstrator presented in the paper has been tested in the laboratory as well as in the real environment, where the ability to detect targets such as cars moving on the roads has been successfully shown.", "title": "" }, { "docid": "05a35ab061a0d5ce18a3ceea8dde78f6", "text": "A single feed grid array antenna for 24 GHz Doppler sensor is proposed in this paper. It is designed on 0.787 mm thick substrate made of Rogers Duroid 5880 (ε<sub>r</sub>= 2.2 and tan δ= 0.0009) with 0.017 mm copper claddings. Dimension of the antenna is 60 mm × 60 mm × 0.787 mm. This antenna exhibits 2.08% impedance bandwidth, 6.25% radiation bandwidth and 20.6 dBi gain at 24.2 GHz. The beamwidth is 14°and 16°in yoz and xoz planes, respectively.", "title": "" }, { "docid": "59119faf4281b933999c62f4d5099495", "text": "In conventional wireless networks, security issues are primarily considered above the physical layer and are usually based on bit-level algorithms to establish the identity of a legitimate wireless device. Physical layer security is a new paradigm in which features extracted from an analog signal can be used to establish the unique identity of a transmitter. Our previous research work into RF fingerprinting has shown that every transmitter has a unique RF fingerprint owing to imperfections in the analog components present in the RF front end. Generally, it is believed that the RF fingerprint of a specific transmitter is same across all receivers. That is, a fingerprint created in one receiver can be transported to another receiver to establish the identity of a transmitter. However, to the best of the author's knowledge, no such example is available in the literature in which an RF fingerprint generated in one receiver is used for identification in other receivers. This paper presents the results of experiments, and analyzing the feasibility of using an universal RF fingerprint of a transmitter for identification across different receivers.", "title": "" }, { "docid": "4b2e6f5a0ce30428377df72d8350d637", "text": "Sentence matching is widely used in various natural language tasks such as natural language inference, paraphrase identification, and question answering. For these tasks, understanding logical and semantic relationship between two sentences is required but it is yet challenging. Although attention mechanism is useful to capture the semantic relationship and to properly align the elements of two sentences, previous methods of attention mechanism simply use a summation operation which does not retain original features enough. 
Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, each layer of which uses concatenated information of attentive features as well as hidden features of all the preceding recurrent layers. It enables preserving the original and the co-attentive feature information from the bottommost word embedding layer to the uppermost recurrent layer. To alleviate the problem of an ever-increasing size of feature vectors due to dense concatenation operations, we also propose to use an autoencoder after dense concatenation. We evaluate our proposed architecture on highly competitive benchmark datasets related to sentence matching. Experimental results show that our architecture, which retains recurrent and attentive features, achieves state-of-the-art performances for most of the tasks.", "title": "" } ]
scidocsrr
cfb07eb9f9e2addbbf9fc41af4bc9184
Towards Privacy Protection in Smart Grid
[ { "docid": "70374d2cbf730fab13c3e126359b59e8", "text": "We define a new distance measure the resistor-average distance between two probability distributions that is closely related to the Kullback-Leibler distance. While the KullbackLeibler distance is asymmetric in the two distributions, the resistor-average distance is not. It arises from geometric considerations similar to those used to derive the Chernoff distance. Determining its relation to well-known distance measures reveals a new way to depict how commonly used distance measures relate to each other.", "title": "" } ]
[ { "docid": "4ab644ac13d8753aa6e747c4070e95e9", "text": "This paper presents a framework for modeling the phase noise in complementary metal–oxide–semiconductor (CMOS) ring oscillators. The analysis considers both linear and nonlinear operations, and it includes both device noise and digital switching noise coupled through the power supply and substrate. In this paper, we show that fast rail-to-rail switching is required in order to achieve low phase noise. Further, flicker noise from the bias circuit can potentially dominate the phase noise at low offset frequencies. We define the effective factor for ring oscillators with large and nonlinear voltage swings and predict its increase for CMOS processes with smaller feature sizes. Our phase-noise analysis is validated via simulation and measurement results for ring oscillators fabricated in a number of CMOS processes.", "title": "" }, { "docid": "bdd25661527218d1829c0cbe1dafee4f", "text": "Nasal reconstruction continues to be a formidable challenge for most plastic surgeons. This article provides an overview of nasal reconstruction with brief descriptions of subtle nuances involving certain techniques that the authors believe help their overall outcomes. The major aspects of nasal reconstruction are included: lining, support, skin coverage, local nasal flaps, nasolabial flap, and paramedian forehead flap. The controversy of the subunit reconstruction versus defect-only reconstruction is briefly discussed. The authors believe that strictly adhering to one principle or another limits one's options, and the patient will benefit more if one is able to apply a variety of options for each individualized defect. A different approach to full-thickness skin grafting is also briefly discussed as the authors propose its utility in lower third reconstruction. In general, the surgeon should approach each patient as a distinct individual with a unique defect and thus tailor each reconstruction to fit the patient's needs and expectations. Postoperative care, including dermabrasion, skin care, and counseling, cannot be understated.", "title": "" }, { "docid": "8fe6559fb71a7267f9e91e1db08774dd", "text": "Software development can be challenging because of the large information spaces that developers must navigate. Without assistance, developers can become bogged down and spend a disproportionate amount of their time seeking information at the expense of other value-producing tasks. Recommendation systems for software engineering (RSSEs) are software tools that can assist developers with a wide range of activities, from reusing code to writing effective bug reports. The authors provide an overview of recommendation systems for software engineering: what they are, what they can do for developers, and what they might do in the future.", "title": "" }, { "docid": "66382b88e0faa573251d5039ccd65d6c", "text": "In this communication, we present a new circularly-polarized array antenna using 2×2 linearly-polarized sub grid arrays in a low temperature co-fired ceramic technology for highly-integrated 60-GHz radio. The sub grid arrays are sequentially rotated and excited with a 90°-phase increment to radiate circularly-polarized waves. The feeding network of the array antenna is based on stripline quarter-wave matched T-junctions. The array antenna has a size of 15×15×0.9 mm3. 
Simulated and measured results confirm wide impedance, axial ratio, pattern, and gain bandwidths.", "title": "" }, { "docid": "3fcb9ab92334e3e214a7db08a93d5acd", "text": "BACKGROUND\nA growing body of literature indicates that physical activity can have beneficial effects on mental health. However, previous research has mainly focussed on clinical populations, and little is known about the psychological effects of physical activity in those without clinically defined disorders.\n\n\nAIMS\nThe present study investigates the association between physical activity and mental health in an undergraduate university population based in the United Kingdom.\n\n\nMETHOD\nOne hundred students completed questionnaires measuring their levels of anxiety and depression using the Hospital Anxiety and Depression Scale (HADS) and their physical activity regime using the Physical Activity Questionnaire (PAQ).\n\n\nRESULTS\nSignificant differences were observed between the low, medium and high exercise groups on the mental health scales, indicating better mental health for those who engage in more exercise.\n\n\nCONCLUSIONS\nEngagement in physical activity can be an important contributory factor in the mental health of undergraduate students.", "title": "" }, { "docid": "3090b9b0017454dc0f0c4549a56d8407", "text": "Light-field cameras have become widely available in both consumer and industrial applications. However, most previous approaches do not model occlusions explicitly, and therefore fail to capture sharp object boundaries. A common assumption is that for a Lambertian scene, a pixel will exhibit photo-consistency, which means all viewpoints converge to a single point when focused to its depth. However, in the presence of occlusions this assumption fails to hold, making most current approaches unreliable precisely where accurate depth information is most important - at depth discontinuities. In this paper, an occlusion-aware depth estimation algorithm is developed; the method also enables identification of occlusion edges, which may be useful in other applications. It can be shown that although photo-consistency is not preserved for pixels at occlusions, it still holds in approximately half the viewpoints. Moreover, the line separating the two view regions (occluded object versus occluder) has the same orientation as that of the occlusion edge in the spatial domain. By ensuring photo-consistency in only the occluded view region, depth estimation can be improved. Occlusion predictions can also be computed and used for regularization. Experimental results show that our method outperforms current state-of-the-art light-field depth estimation algorithms, especially near occlusion boundaries.", "title": "" }, { "docid": "cadd42c1fcad9f8a615635817eaebf5a", "text": "Case studies of landslide tsunamis require integration of marine geology data and interpretations into numerical simulations of tsunami attack. Many landslide tsunami generation and propagation models have been proposed in recent time, further motivated by the 1998 Papua New Guinea event. However, few of these models have proven capable of integrating the best available marine geology data and interpretations into successful case studies that reproduce all available tsunami observations and records. We show that nonlinear and dispersive tsunami propagation models may be necessary for many landslide tsunami case studies. 
GEOWAVE is a comprehensive tsunami simulation model formed in part by combining the Tsunami Open and Progressive Initial Conditions System (TOPICS) with the fully nonlinear Boussinesq water wave model FUNWAVE. TOPICS uses curve fits of numerical results from a fully nonlinear potential flow model to provide approximate landslide tsunami sources for tsunami propagation models, based on marine geology data and interpretations. In this work, we validate GEOWAVE with successful case studies of the 1946 Unimak, Alaska, the 1994 Skagway, Alaska, and the 1998 Papua New Guinea events. GEOWAVE simulates accurate runup and inundation at the same time, with no additional user interference or effort, using a slot technique. Wave breaking, if it occurs during shoaling or runup, is also accounted for with a dissipative breaking model acting on the wave front. The success of our case studies depends on the combination of accurate tsunami sources and an advanced tsunami propagation and inundation model.", "title": "" }, { "docid": "c57fa27a4745e3a5440bd7209cf109a2", "text": "OBJECTIVES\nWe sought to use natural language processing to develop a suite of language models to capture key symptoms of severe mental illness (SMI) from clinical text, to facilitate the secondary use of mental healthcare data in research.\n\n\nDESIGN\nDevelopment and validation of information extraction applications for ascertaining symptoms of SMI in routine mental health records using the Clinical Record Interactive Search (CRIS) data resource; description of their distribution in a corpus of discharge summaries.\n\n\nSETTING\nElectronic records from a large mental healthcare provider serving a geographic catchment of 1.2 million residents in four boroughs of south London, UK.\n\n\nPARTICIPANTS\nThe distribution of derived symptoms was described in 23 128 discharge summaries from 7962 patients who had received an SMI diagnosis, and 13 496 discharge summaries from 7575 patients who had received a non-SMI diagnosis.\n\n\nOUTCOME MEASURES\nFifty SMI symptoms were identified by a team of psychiatrists for extraction based on salience and linguistic consistency in records, broadly categorised under positive, negative, disorganisation, manic and catatonic subgroups. Text models for each symptom were generated using the TextHunter tool and the CRIS database.\n\n\nRESULTS\nWe extracted data for 46 symptoms with a median F1 score of 0.88. Four symptom models performed poorly and were excluded. From the corpus of discharge summaries, it was possible to extract symptomatology in 87% of patients with SMI and 60% of patients with non-SMI diagnosis.\n\n\nCONCLUSIONS\nThis work demonstrates the possibility of automatically extracting a broad range of SMI symptoms from English text discharge summaries for patients with an SMI diagnosis. Descriptive data also indicated that most symptoms cut across diagnoses, rather than being restricted to particular groups.", "title": "" }, { "docid": "471eca6664d0ae8f6cdfb848bc910592", "text": "Taxonomic relation identification aims to recognize the ‘is-a’ relation between two terms. Previous works on identifying taxonomic relations are mostly based on statistical and linguistic approaches, but the accuracy of these approaches is far from satisfactory. In this paper, we propose a novel supervised learning approach for identifying taxonomic relations using term embeddings. 
For this purpose, we first design a dynamic weighting neural network to learn term embeddings based on not only the hypernym and hyponym terms, but also the contextual information between them. We then apply such embeddings as features to identify taxonomic relations using a supervised method. The experimental results show that our proposed approach significantly outperforms other state-of-the-art methods by 9% to 13% in terms of accuracy for both general and specific domain datasets.", "title": "" }, { "docid": "aee91ee5d4cbf51d9ce1344be4e5448c", "text": "Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables to transfer techniques across research lines in a principled way. For example, we apply the importance weighting method in VAE literatures for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show generality and effectiveness of the transfered techniques.", "title": "" }, { "docid": "52ce8c1259050f403723ec38782898f1", "text": "Indian population is growing very fast and is responsible for posing various environmental risks like traffic noise which is the primitive contributor to the overall noise pollution in urban environment. So, an attempt has been made to develop a web enabled application for spatio-temporal semantic analysis of traffic noise of one of the urban road segments in India. Initially, a traffic noise model was proposed for the study area based on the Calixto model. Later, a City Geographic Markup Language (CityGML) model, which is an OGC encoding standard for 3D data representation, was developed and stored into PostGIS. A web GIS framework was implemented for simulation of traffic noise level mapped on building walls using the data from PostGIS. Finally, spatio-temporal semantic analysis to quantify the effects in terms of threshold noise level, number of walls and roofs affected from start to the end of the day, was performed.", "title": "" }, { "docid": "ddd3f4e9bf77a65c7b183d04905e1b68", "text": "The immune system is built to defend an organism against both known and new attacks, and functions as an adaptive distributed defense system. Artificial Immune Systems abstract the structure of immune systems to incorporate memory, fault detection and adaptive learning. We propose an immune system based real time intrusion detection system using unsupervised clustering. The model consists of two layers: a probabilistic model based T-cell algorithm which identifies possible attacks, and a decision tree based B-cell model which uses the output from T-cells together with feature information to confirm true attacks. The algorithm is tested on the KDD 99 data, where it achieves a low false alarm rate while maintaining a high detection rate. 
This is true even in case of novel attacks,which is a significant improvement over other algorithms.", "title": "" }, { "docid": "91ed0637e0533801be8b03d5ad21d586", "text": "With the rapid development of modern wireless communication systems, the desirable miniaturization, multifunctionality strong harmonic suppression, and enhanced bandwidth of the rat-race coupler has generated much interest and continues to be a focus of research. Whether the current rat-race coupler is sufficient to adapt to the future development of microwave systems has become a heated topic.", "title": "" }, { "docid": "db2b94a49d4907504cf2444305287ec8", "text": "In this paper, we propose a principled Tag Disentangled Generative Adversarial Networks (TDGAN) for re-rendering new images for the object of interest from a single image of it by specifying multiple scene properties (such as viewpoint, illumination, expression, etc.). The whole framework consists of a disentangling network, a generative network, a tag mapping net, and a discriminative network, which are trained jointly based on a given set of images that are completely/partially tagged (i.e., supervised/semi-supervised setting). Given an input image, the disentangling network extracts disentangled and interpretable representations, which are then used to generate images by the generative network. In order to boost the quality of disentangled representations, the tag mapping net is integrated to explore the consistency between the image and its tags. Furthermore, the discriminative network is introduced to implement the adversarial training strategy for generating more realistic images. Experiments on two challenging datasets demonstrate the state-of-the-art performance of the proposed framework in the problem of interest.", "title": "" }, { "docid": "38d7565371e8faede8ed06e6623bb40a", "text": "Exposing the processing history of a digital image is an important problem for forensic analyzers and steganalyzers. As the median filter is a popular nonlinear denoising operator, the blind forensics of median filtering is particularly interesting. This paper proposes a novel approach for detecting median filtering in digital images, which can 1) accurately detect median filtering in arbitrary images, even reliably detect median filtering in low-resolution and JPEG compressed images; and 2) reliably detect tampering when part of a median-filtered image is inserted into a nonmedian-filtered image, or vice versa. The effectiveness of the proposed approach is exhaustively evaluated in five different image databases.", "title": "" }, { "docid": "5c58eb86ec2fb61a4c26446a41a9037a", "text": "The filter bank methods have been a popular non-parametric way of computing the complex amplitude spectrum. So far, the length of the filters in these filter banks has been set to some constant value independently of the data. In this paper, we take the first step towards considering the filter length as an unknown parameter. Specifically, we derive a very simple and approximate way of determining the optimal filter length in a data-adaptive way. Based on this analysis, we also derive a model averaged version of the forward and the forward-backward amplitude spectral Capon estimators. 
Through simulations, we show that these estimators significantly improve the estimation accuracy compared to the traditional Capon estimators.", "title": "" }, { "docid": "f2cc1c45ecf32015eb6f0842badafd7c", "text": "Firms are facing more difficulties with the implementation of strategies than with its formulation. Therefore, this paper examines the linkage between business strategy, project portfolio management, and business success to close the gap between strategy formulation and implementation. Earlier research has found some supporting evidence of a positive relationship between isolated concepts, but so far there is no coherent and integral framework covering the whole cycle from strategy to success. Therefore, the existing research on project portfolio management is extended by the concept of strategic orientation. Based on a literature review, a comprehensive conceptual model considering strategic orientation, project portfolio structuring, project portfolio success, and business success is developed. This model can be used for future empirical research on the influence of strategy on project portfolio management and its success. Furthermore, it can easily be extended e.g. by contextual factors. © 2010 Elsevier Ltd. and IPMA. All rights reserved.", "title": "" }, { "docid": "c24156b6c9b8f5c04fe40e1c6814d115", "text": "This paper presents a compact SIW (substrate integrated waveguide) 3×3 Butler matrix (BM) for 5G mobile applications. The detailed structuring procedures, parameter determinations of each involved component are provided. To validate the 3×3 BM, a slot array is designed. The cascading simulations and prototype measurements are also carried out. The overall performance and dimension show that it can be used for 5G mobile devices. The measured S-parameters agree well with the simulated ones. The measured gains are in the range of 8.1 dBi ∼ 11.1 dBi, 7.1 dBi ∼ 9.8 dBi and 8.9 dBi ∼ 11 dBi for port 1∼3 excitations.", "title": "" }, { "docid": "f38854d7c788815d8bc6d20db284e238", "text": "This paper presents the development of a Sinhala Speech Recognition System to be deployed in an Interactive Voice Response (IVR) system of a telecommunication service provider. The main objectives are to recognize Sinhala digits and names of Sinhala songs to be set up as ringback tones. Sinhala being a phonetic language, its features are studied to develop a list of 47 phonemes. A continuous speech recognition system is developed based on Hidden Markov Model (HMM). The acoustic model is trained using the voice through mobile phone. The outcome is a speaker independent speech recognition system which is capable of recognizing 10 digits and 50 Sinhala songs. A word error rate (WER) of 11.2% using a speech corpus of 0.862 hours and a sentence error rate (SER) of 5.7% using a speech corpus of 1.388 hours are achieved for digits and songs respectively.", "title": "" } ]
scidocsrr
eec8b15a53c9494891ee5be7e31d40b7
A Study of RPL DODAG Version Attacks
[ { "docid": "6fd71fe20e959bfdde866ff54b2b474b", "text": "The IETF developed the RPL routing protocol for Low power and Lossy Networks (LLNs). RPL allows for automated setup and maintenance of the routing tree for a meshed network using a common objective, such as energy preservation or most stable routes. To handle failing nodes and other communication disturbances, RPL includes a number of error correction functions for such situations. These error handling mechanisms, while maintaining a functioning routing tree, introduce an additional complexity to the routing process. Being a relatively new protocol, the effect of the error handling mechanisms within RPL needs to be analyzed. This paper presents an experimental analysis of RPL’s error correction mechanisms by using the Contiki RPL implementation along with an SNMP agent to monitor the performance of RPL.", "title": "" } ]
[ { "docid": "a1ca37cbed2163b4a6a8a339c3d18c98", "text": "We propose a data-driven method for designing 3D models that can be fabricated. First, our approach converts a collection of expert-created designs to a dataset of parameterized design templates that includes all information necessary for fabrication. The templates are then used in an interactive design system to create new fabri-cable models in a design-by-example manner. A simple interface allows novice users to choose template parts from the database, change their parameters, and combine them to create new models. Using the information in the template database, the system can automatically position, align, and connect parts: the system accomplishes this by adjusting parameters, adding appropriate constraints, and assigning connectors. This process ensures that the created models can be fabricated, saves the user from many tedious but necessary tasks, and makes it possible for non-experts to design and create actual physical objects. To demonstrate our data-driven method, we present several examples of complex functional objects that we designed and manufactured using our system.", "title": "" }, { "docid": "e5c76ea59f7de3a2351823347b4b126c", "text": "We present a deformation-driven approach to topology-varying 3D shape correspondence. In this paradigm, the best correspondence between two shapes is the one that results in a minimal-energy, possibly topology-varying, deformation that transforms one shape to conform to the other while respecting the correspondence. Our deformation model, called GeoTopo transform, allows both geometric and topological operations such as part split, duplication, and merging, leading to fine-grained and piecewise continuous correspondence results. The key ingredient of our correspondence scheme is a deformation energy that penalizes geometric distortion, encourages structure preservation, and simultaneously allows topology changes. This is accomplished by connecting shape parts using structural rods, which behave similarly to virtual springs but simultaneously allow the encoding of energies arising from geometric, structural, and topological shape variations. Driven by the combined deformation energy, an optimal shape correspondence is obtained via a pruned beam search. We demonstrate our deformation-driven correspondence scheme on extensive sets of man-made models with rich geometric and topological variation and compare the results to state-of-the-art approaches.", "title": "" }, { "docid": "7a6f97457f70e2d7dbcd488f9ed6c390", "text": "This paper proposes a novel participant selection framework, named CrowdRecruiter, for mobile crowdsensing. CrowdRecruiter operates on top of energy-efficient Piggyback Crowdsensing (PCS) task model and minimizes incentive payments by selecting a small number of participants while still satisfying probabilistic coverage constraint. In order to achieve the objective when piggybacking crowdsensing tasks with phone calls, CrowdRecruiter first predicts the call and coverage probability of each mobile user based on historical records. It then efficiently computes the joint coverage probability of multiple users as a combined set and selects the near-minimal set of participants, which meets coverage ratio requirement in each sensing cycle of the PCS task. 
We evaluated CrowdRecruiter extensively using a large-scale real-world dataset and the results show that the proposed solution significantly outperforms three baseline algorithms by selecting 10.0% -- 73.5% fewer participants on average under the same probabilistic coverage constraint.", "title": "" }, { "docid": "72173ef38d5fd62f73de467e722f970e", "text": "This study uses data collected from adult U.S. residents in 2004 and 2005 to examine whether loneliness and life satisfaction are associated with time spent at home on various Internet activities. Cross-sectional models reveal that time spent browsing the web is positively related to loneliness and negatively related to life satisfaction. Some of the relationships revealed by cross-sectional models persist even when considering the same individuals over time in fixed-effects models that account for time-invariant, individual-level characteristics. Our results vary according to how the time use data were collected, indicating that survey design can have important consequences for research in this area. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b79575908a84a015c8a83d35c63e4f06", "text": "This study examines the relation between stress and illness among bus drivers in a large American city. Several factors are identified that predict stress-related ill health for this occupational group. Canonical correlation techniques are used to combine daily work stress and recent stressful life events into a single life/work stress variate. Likewise, somatic symptoms and serious illness reports are combined into a single canonical illness variate. This procedure simplifies the analysis of multiple stress and illness indicators and also permits the statistical control of potential contaminating influences on stress and illness measures (eg, neuroticism). Discriminant function analysis identified four variables that differentiate bus drivers who get ill under high stress (N = 137) from those who remain healthy under stress (N = 137). Highly stressed and ill bus drivers use more avoidance coping behaviors, report more illness in their family medical histories, are low in the disposition of \"personality hardiness,\" and are also low in social assets. The derived stepwise discriminant function correctly classified 71% of cases in an independent \"hold-out\" sample. These results suggest fruitful areas of attention for health promotion and stress management programs in the public transit industry.", "title": "" }, { "docid": "2fbe9db6c676dd64c95e72e8990c63f0", "text": "Community detection is one of the most important problems in the field of complex networks in recent years. Themajority of present algorithms only find disjoint communities, however, community often overlap to some extent in many real-world networks. In this paper, an improvedmulti-objective quantum-behaved particle swarm optimization (IMOQPSO) based on spectral-clustering is proposed to detect the overlapping community structure in complex networks. Firstly, the line graph of the graph modeling the network is formed, and a spectral method is employed to extract the spectral information of the line graph. Secondly, IMOQPSO is employed to solve the multi-objective optimization problem so as to resolve the separated community structure in the line graph which corresponding to the overlapping community structure in the graph presenting the network. Finally, a fine-tuning strategy is adopted to improve the accuracy of community detection. 
The experiments on both synthetic and real-world networks demonstrate our method achieves cover results which fit the real situation in an even better fashion.", "title": "" }, { "docid": "a9e9afef9505f5f4f397f2eb5c017213", "text": "business objects (Obj) 16 Basic abstract geoscience business objects, including projects, wells, well logs, markers, zones, and seismic data Concrete business object input/output (I/O) 15 Concrete input/output behaviors for abstract business object persistence mechanisms Data import/export filters (Filter) 14 Abstract factories, strategies, and GUI components for importing and exporting “foreign” data formats Application data factory (Data) 6 Abstract data object factory providing GUI components for user data selection, delivering observers of abstract business objects to application contexts Reusable GUI components (Component) 22 Reusable unit-labeling (feet/meters) text fields, data selection components, and calculation configuration components Application support components (Prefs) 15 User preference, window layout, and session save/reinstantiation support Application GUI (App) 73 User data display and editing, object property dialogs, and calculation setup dialogs and utilities that are not directly traceable to specific requirements (Tools). Estimation Study The estimation tools available to our development team included Cocomo II and Estimate Pro V2.0.2,3 (See the “Estimation Techniques” sidebar for a description of these and other estimator techniques.) Both of these tools generate estimates for the amount of effort, time, or cost, but they require input specifying the size of the work to be completed. The problem we faced throughout the project was estimating size. No one on the development team was trained in function-point analysis, so we loosely based our attempts at prediction on analogy methods and the Delphi principle of reaching consensus with individual, “expert” estimates. Without a defined, repeatable sizeestimation process, these predictions were little better than outright guesses. Finally, our failure to record our predictions—and subsequently compare the actual size against the N o v e m b e r / D e c e m b e r 2 0 0 0 I E E E S O F T W A R E 29 Software estimation techniques generally fall into one of three categories: ■ empirical techniques that relate observations of past performance to predictions of future efforts, ■ regression models that are derived from historical data and describe the mathematical relationships among project variables, and ■ theory-based techniques that are based on the underlying theoretical considerations of software development processes.1 For the purposes of this discussion, I merely draw a distinction between techniques that might help estimate size versus those used to estimate effort, schedule, and cost. A well-known and widely used regression technique is Cocomo (constructive cost model).2,3 The Cocomo II model estimates the effort, schedule, and cost required to develop a software product, accounting for different project phases and activities. This type of estimation method uses regression equations (developed from historical data) to compute schedule and cost by factoring in various project drivers such as team experience, the type of system under development, system size, nonfunctional product attributes, and so on. The SLIM (software lifecycle management) method4 is a theory-based technique1 that uses two equations to estimate development effort and schedule. 
The software equation, derived from empirical observations about productivity levels, expresses development effort in terms of project size and development time. The manpower equation expresses the buildup of manpower as a function of development time. Sizing techniques rely primarily on empirical methods. A few of these include the Standard and Wideband Delphi estimation methods, analogy techniques, the software sizing model (SSM), and function-point analysis. Observations and an understanding of historical project information can help predict the size of future efforts. The Delphi methods2,5 employ techniques for decomposing a project into individual work activities, letting a team of experts generate individual estimates for each activity and form a consensus estimate for the project. Estimation by analogy involves examining similarities and differences between former and current efforts and extrapolating the qualities of measured past work to future efforts. The SSM decomposes a project into individual modules and employs different methods to estimate the relative size of software modules through pair-wise comparisons, PERT (identifying the lowest, most likely, and highest possible sizes) estimates, sorting, and ranking techniques. The estimates are calibrated to local conditions by including at least two reference modules of known size. The technique generates a size for each module and for the overall project.6 Function-point analysis7 measures a software system’s size in terms of system functionality, independent of implementation language. The function-point method is considered an empirical estimation approach1 due to the observed relationship between the effort required to build a system and identifiable system features, such as external inputs, interface files, outputs, queries, and logical internal tables. Counts of system features are adjusted using weighting and complexity factors to arrive at a size expressed in function points. Although function-point analysis was originally developed in a world of database and procedural programming, the method has mapped well into the object-oriented development paradigm.8 References 1. R.E. Fairley, “Recent Advances in Software Estimation Techniques,” Proc. 14th Int’l Conf. Software Eng., ACM Press, New York, 1992. 2. B. Boehm, Software Engineering Economics, Prentice Hall, Upper Saddle River, N.J., 1981. 3. Barry Boehm et al., “Cost Models for Future Software Life Cycle Process: Cocomo 2.0,” Ann. of Software Eng. Special Volume on Software Process and Product Measurement, J.D. Arther and S.M. Henry, eds., Science Publishers, Amsterdam, The Netherlands, Vol. 1, 1995, pp. 45–60. 4. L.H. Putnam, “A General Empirical Solution to the Macro Software Sizing and Estimating Problem,” IEEE Trans. Software Eng., Vol. 4, No. 4, Apr. 1978, pp. 345–361. 5. K. Wiegers, “Stop Promising Miracles,” Software Development, Vol. 8, No. 2, Feb. 2000, p. 49. 6. G.J. Bozoki, “Performance Simulation of SSM (Software Sizing Model),” Proc. 13th Conf., Int’l Soc. of Parametric Analysts, Int’l Soc. of Parametric Analysts, New Orleans, 1991, pp. CM–14. 7. A. Albrecht, “Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation,” IEEE Trans. Software Eng., Vol. SE-9, No. 6, 1983. 8. T. Fetcke, A. Abran, and T.H. Nguyen, “Mapping the OO-Jacobson Approach into Function Point Analysis,” Proc. TOOLS-23’97, IEEE Press, Piscataway, N.J., 1998. 
Estimation Techniques", "title": "" }, { "docid": "850223b7efdea78735c8226582a2b67d", "text": "In this paper, the performance of Long Range (LoRa) Internet of Things (IoT) technology is investigated. By considering Chirp Spread Spectrum (CSS) technique of LoRa, an approximation of the Bit Error Rate (BER) is presented and evaluated through intensive simulations. Unlike previous works which present the BER of LoRa in terms of the ratio of energy ber bit to noise ratio only without any proofing, our presented work expresses BER in terms of LoRa's modulation patterns such as the spreading factor, the code rate, the symbol frequency and the SNR. Numerical results are carried out in order to investigate the LoRa performance and to illustrate the accuracy of the new BER expression.", "title": "" }, { "docid": "b0f396c692568194708a7cf6b8fce394", "text": "DreamCam is a modular smart camera constructed with the use of an FPGA like main processing board. The core of the camera is an Altera Cyclone-III associated with a CMOS imager and six private Ram blocks. The main novel feature of our work consists in proposing a new smart camera architecture and several modules (IP) to efficiently extract and sort the visual features in real time. In this paper, extraction is performed by a Harris and Stephen filtering associated with customized modules. These modules extract, select and sort visual features in real-time. As a result, DreamCam (with such a configuration) provides a description of each visual feature in the form of its position and the grey-level template around it.", "title": "" }, { "docid": "d0dd13964de87acab0f7fe76585d0bbf", "text": "The continual growth of electronic medical record (EMR) databases has paved the way for many data mining applications, including the discovery of novel disease-drug associations and the prediction of patient survival rates. However, these tasks are hindered because EMRs are usually segmented or incomplete. EMR analysis is further limited by the overabundance of medical term synonyms and morphologies, which causes existing techniques to mismatch records containing semantically similar but lexically distinct terms. Current solutions fill in missing values with techniques that tend to introduce noise rather than reduce it. In this paper, we propose to simultaneously infer missing data and solve semantic mismatching in EMRs by first integrating EMR data with molecular interaction networks and domain knowledge to build the HEMnet, a heterogeneous medical information network. We then project this network onto a low-dimensional space, and group entities in the network according to their relative distances. Lastly, we use this entity distance information to enrich the original EMRs. We evaluate the effectiveness of this method according to its ability to separate patients with dissimilar survival functions. We show that our method can obtain significant (p-value < 0.01) results for each cancer subtype in a lung cancer dataset, while the baselines cannot.", "title": "" }, { "docid": "4028f1eb3f14297fea30ae43fdf7fbb6", "text": "The optimisation of a tail-sitter UAV (Unmanned Aerial Vehicle) that uses a stall-tumble manoeuvre to transition from vertical to horizontal flight and a pull-up manoeuvre to regain the vertical is investigated. The tandem wing vehicle is controlled in the hover and vertical flight phases by prop-wash over wing mounted control surfaces. 
It represents an innovative and potentially simple solution to the dual requirements of VTOL (Vertical Take-off and Landing) and high speed forward flight by obviating the need for complex mechanical systems such as rotor heads or tilt-rotor systems.", "title": "" }, { "docid": "b9dcc111261fa97e2d36b9a536a5861d", "text": "We present the first open-source tool for annotating morphosyntactic tense, mood and voice for English, French and German verbal complexes. The annotation is based on a set of language-specific rules, which are applied on dependency trees and leverage information about lemmas, morphological properties and POS-tags of the verbs. Our tool has an average accuracy of about 76%. The tense, mood and voice features are useful both as features in computational modeling and for corpuslinguistic research.", "title": "" }, { "docid": "9223d7f1f5434b6b66306da868ac063a", "text": "This article proposes a queueing and patient pooling approach for radiotherapy capacity allocation with heterogeneous treatment machines called linear accelerators (LINACs), different waiting time targets (WTTs), and treatment protocols. We first propose a novel queueing framework with LINAC time slots as servers. This framework leads to simple single-class queues for the evaluation of WTT satisfaction that would, otherwise, require an analysis of complicated multiclass queues with reentrance. Mixed-integer programming models are proposed for capacity allocation and case-mix optimization under WTT constraints. We then extend the queueing framework by pooling the basic patient types into groups sharing the same slot servers. A mathematical programming model and a pairwise merging heuristic are proposed for patient pooling optimization to minimize the overall LINAC capacity needed to meet all WTT requirements. Extended numerical experiments are conducted to assess the efficiency of our approach and to show the properties of optimal capacity allocation and patient pooling.", "title": "" }, { "docid": "e426f588af20778d069e8298c1e1a07f", "text": "Objectives and Scope A major question that arises in many areas of Cognitive Science is the need to distinguish true causal connections between variables from mere correlations. The most common way of addressing this distinction is the design of wellcontrolled experiments. However, in many situations, it is extremely difficult –or even outright impossible– to perform such experiments. Researchers are then forced to rely on correlational data in order to make causal inferences. This situation is especially common when one needs to analyze longitudinal data corresponding to historical time-series, symbolic sequences, or developmental data. These inferences are often very problematic. From the correlations alone it is difficult to determine the direction of the causal arrow linking two variables. Worse even, the lack of controls of observational data entail that correlations found between two variables need not reflect any causal connection between them. The possibility always remains that some third variable which the researchers were not able to measure, or were actually unaware of, is the actually driver for both measured variables, giving rise to the mirage of a direct relationship between them. In recent years, it has been shown that, under particular circumstances, one can use correlational information for making sound causal inferences (cf., Pearl, 2000). 
In this tutorial I will provide a hands-on introduction to the use of modern causality techniques for the analysis of observational time series. I will cover causality analyses for three types of time-series that are often encountered in Cognitive Science research:", "title": "" }, { "docid": "c675a2f1fed4ccb5708be895190b02cd", "text": "Decompilation is important for many security applications; it facilitates the tedious task of manual malware reverse engineering and enables the use of source-based security tools on binary code. This includes tools to find vulnerabilities, discover bugs, and perform taint tracking. Recovering high-level control constructs is essential for decompilation in order to produce structured code that is suitable for human analysts and sourcebased program analysis techniques. State-of-the-art decompilers rely on structural analysis, a pattern-matching approach over the control flow graph, to recover control constructs from binary code. Whenever no match is found, they generate goto statements and thus produce unstructured decompiled output. Those statements are problematic because they make decompiled code harder to understand and less suitable for program analysis. In this paper, we present DREAM, the first decompiler to offer a goto-free output. DREAM uses a novel patternindependent control-flow structuring algorithm that can recover all control constructs in binary programs and produce structured decompiled code without any goto statement. We also present semantics-preserving transformations that can transform unstructured control flow graphs into structured graphs. We demonstrate the correctness of our algorithms and show that we outperform both the leading industry and academic decompilers: Hex-Rays and Phoenix. We use the GNU coreutils suite of utilities as a benchmark. Apart from reducing the number of goto statements to zero, DREAM also produced more compact code (less lines of code) for 72.7% of decompiled functions compared to Hex-Rays and 98.8% compared to Phoenix. We also present a comparison of Hex-Rays and DREAM when decompiling three samples from Cridex, ZeusP2P, and SpyEye malware families.", "title": "" }, { "docid": "83f1fc22d029b3a424afcda770a5af23", "text": "Three species of Xerolycosa: Xerolycosa nemoralis (Westring, 1861), Xerolycosa miniata (C.L. Koch, 1834) and Xerolycosa mongolica (Schenkel, 1963), occurring in the Palaearctic Region are surveyed, illustrated and redescribed. Arctosa mongolica Schenkel, 1963 is removed from synonymy with Xerolycosa nemoralis and transferred to Xerolycosa, and the new combination Xerolycosa mongolica (Schenkel, 1963) comb. n. is established. One new synonymy, Xerolycosa undulata Chen, Song et Kim, 1998 syn.n. from Heilongjiang = Xerolycosa mongolica (Schenkel, 1963), is proposed. In addition, one more new combination is established, Trochosa pelengena (Roewer, 1960) comb. n., ex Xerolycosa.", "title": "" }, { "docid": "ac3d0b26fbb07df20a7f2cd7e2400039", "text": "Motion sensors as inertial measurement units (IMU) are widely used in robotics, for instance in the navigation and mapping tasks. Nowadays, many low cost micro electro mechanical systems (MEMS) based IMU are available off the shelf, while smartphones and similar devices are almost always equipped with low-cost embedded IMU sensors. Nevertheless, low cost IMUs are affected by systematic error given by imprecise scaling factors and axes misalignments that decrease accuracy in the position and attitudes estimation. 
In this paper, we propose a robust and easy to implement method to calibrate an IMU without any external equipment. The procedure is based on a multi-position scheme, providing scale and misalignments factors for both the accelerometers and gyroscopes triads, while estimating the sensor biases. Our method only requires the sensor to be moved by hand and placed in a set of different, static positions (attitudes). We describe a robust and quick calibration protocol that exploits an effective parameterless static filter to reliably detect the static intervals in the sensor measurements, where we assume local stability of the gravity's magnitude and stable temperature. We first calibrate the accelerometers triad taking measurement samples in the static intervals. We then exploit these results to calibrate the gyroscopes, employing a robust numerical integration technique. The performances of the proposed calibration technique has been successfully evaluated via extensive simulations and real experiments with a commercial IMU provided with a calibration certificate as reference data.", "title": "" }, { "docid": "3264b3fb1737be4f77aea8803daa2b27", "text": "Long Short-Term Memory (LSTM) is a deep recurrent neural network architecture with high computational complexity. Contrary to the standard practice to train LSTM online with stochastic gradient descent (SGD) methods, we propose a matrix-based batch learning method for LSTM with full Backpropagation Through Time (BPTT). We further solve the state drifting issues as well as improving the overall performance for LSTM using revised activation functions for gates. With these changes, advanced optimization algorithms are applied to LSTM with long time dependency for the first time and show great advantages over SGD methods. We further demonstrate that large-scale LSTM training can be greatly accelerated with parallel computation architectures like CUDA and MapReduce.", "title": "" }, { "docid": "2f2ee6a0134d7bfc9a619e7e5dd043a1", "text": "Biometric technology offers an advanced verification of human identity used in most schools and companies for recording the daily attendance (login and logout) and generating the payroll of the employees. This study uses the biometric technology to address the problems of many companies or institutions such as employees doing the proxy attendance for their colleagues, stealing company time, putting in more time in the daily time record (DTR), and increasing the amount of gross payroll resulted of buddy punching. The researcher developed a system for employee’s attendance and processing of payroll with the use of fingerprint reader and the webcam device. The employee uses one finger to record his or her time of arrival and departure from the office through the use of the fingerprint reader. The DTR of employees is recorded correctly by the system; the tardiness and under time in the morning and in the afternoon of their official time is also computed. The system was developed using the Microsoft Visual C# 2008 programming language, MySQL 5.1 database software, and Software Development Kit (SDK) for the fingerprint reader and the webcam device. The data were analyzed using the percentage technique and arithmetic mean. The study was tested for 30 employees using the fingerprint reader for biometric fingerprint scanning (login and logout), and 50 employees were recorded and used for processing the payroll, and the proposed system. 
Results of biometric fingerprint scanning for the login and logout revealed that 90% of the employees have been accepted for the first attempt, 5.84% for the second attempt, 3.33% and 0.83% for the third and more than four attempts, respectively. The result of processing the advanced payroll (permanent, substitute, temporary & casual employees) and regular payroll (job order and contract of service employees) is 17.07 s and 5.08 s respectively. The Employee Attendance and Payroll System (EAPS) showed that the verification and identification of the employees in the school campus using the biometric technology provides a reliable and accurate recording in the daily attendance, and generate effectively the monthly payroll.", "title": "" }, { "docid": "9e6bfc7b5cc87f687a699c62da013083", "text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.", "title": "" } ]
scidocsrr
e39b08f10862b0d670b6047728f333a9
Deep recurrent neural networks for predicting intraoperative and postoperative outcomes and trends
[ { "docid": "386cd963cf70c198b245a3251c732180", "text": "Support vector machines (SVMs) are promising methods for the prediction of -nancial timeseries because they use a risk function consisting of the empirical error and a regularized term which is derived from the structural risk minimization principle. This study applies SVM to predicting the stock price index. In addition, this study examines the feasibility of applying SVM in -nancial forecasting by comparing it with back-propagation neural networks and case-based reasoning. The experimental results show that SVM provides a promising alternative to stock market prediction. c © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c247cca9f592cbc274c989bff1586ab9", "text": "We demonstrate a simple strategy to cope with missing data in sequential inputs, addressing the task of multilabel classification of diagnoses given clinical time series. Collected from the intensive care unit (ICU) of a major urban medical center, our data consists of multivariate time series of observations. The data is irregularly sampled, leading to missingness patterns in re-sampled sequences. In this work, we show the remarkable ability of RNNs to make effective use of binary indicators to directly model missing data, improving AUC and F1 significantly. However, while RNNs can learn arbitrary functions of the missing data and observations, linear models can only learn substitution values. For linear models and MLPs, we show an alternative strategy to capture this signal. Additionally, we evaluate LSTMs, MLPs, and linear models trained on missingness patterns only, showing that for several diseases, what tests are run can be more predictive than the results themselves.", "title": "" } ]
[ { "docid": "dbc28fb8fe14ac5fcfe5a1c52df5b8f0", "text": "Wireless Local Area Networks frequently referred to as WLANs or Wi-Fi networks are all the vehemence in recent times. People are installing these in houses, institutions, offices and hotels etc, without any vain. In search of fulfilling the wireless demands, Wi-Fi product vendors and service contributors are exploding up as quickly as possible. Wireless networks offer handiness, mobility, and can even be less expensive to put into practice than wired networks in many cases. With the consumer demand, vendor solutions and industry standards, wireless network technology is factual and is here to stay. But how far this technology is going provide a protected environment in terms of privacy is again an anonymous issue. Realizing the miscellaneous threats and vulnerabilities associated with 802.11-based wireless networks and ethically hacking them to make them more secure is what this paper is all about. On this segment, we'll seize a look at common threats, vulnerabilities related with wireless networks. And also we have discussed the entire process of cracking WEP (Wired Equivalent Privacy) encryption of WiFi, focusing the necessity to become familiar with scanning tools like Cain, NetStumbler, Kismet and MiniStumbler to help survey the area and tests we should run so as to strengthen our air signals.", "title": "" }, { "docid": "13211210ca0a3fda62fd44383eca6b52", "text": "Cancer is the most important cause of death for both men and women. The early detection of cancer can be helpful in curing the disease completely. So the requirement of techniques to detect the occurrence of cancer nodule in early stage is increasing. A disease that is commonly misdiagnosed is lung cancer. Earlier diagnosis of Lung Cancer saves enormous lives, failing which may lead to other severe problems causing sudden fatal end. Its cure rate and prediction depends mainly on the early detection and diagnosis of the disease. One of the most common forms of medical malpractices globally is an error in diagnosis. Knowledge discovery and data mining have found numerous applications in business and scientific domain. Valuable knowledge can be discovered from application of data mining techniques in healthcare system. In this study, we briefly examine the potential use of classification based data mining techniques such as Rule based, Decision tree, Naïve Bayes and Artificial Neural Network to massive volume of healthcare data. The healthcare industry collects huge amounts of healthcare data which, unfortunately, are not “mined” to discover hidden information. For data preprocessing and effective decision making One Dependency Augmented Naïve Bayes classifier (ODANB) and naive creedal classifier 2 (NCC2) are used. This is an extension of naïve Bayes to imprecise probabilities that aims at delivering robust classifications also when dealing with small or incomplete data sets. Discovery of hidden patterns and relationships often goes unexploited. Diagnosis of Lung Cancer Disease can answer complex “what if” queries which traditional decision support systems cannot. Using generic lung cancer symptoms such as age, sex, Wheezing, Shortness of breath, Pain in shoulder, chest, arm, it can predict the likelihood of patients getting a lung cancer disease. Aim of the paper is to propose a model for early detection and correct diagnosis of the disease which will help the doctor in saving the life of the patient. 
Keywords—Lung cancer, Naive Bayes, ODANB, NCC2, Data Mining, Classification.", "title": "" }, { "docid": "1298ddbeea84f6299e865708fd9549a6", "text": "Since its invention in the early 1960s (Rotman and Turner, 1963), the Rotman Lens has proven itself to be a useful beamformer for designers of electronically scanned arrays. Inherent in its design is a true time delay phase shift capability that is independent of frequency and removes the need for costly phase shifters to steer a beam over wide angles. The Rotman Lens has a long history in military radar, but it has also been used in communication systems. This article uses the developed software to design and analyze a microstrip Rotman Lens for the Ku band. The initial lens design will come from a tool based on geometrical optics (GO). A second stage of analysis will be performed using a full wave finite difference time domain (FDTD) solver. The results between the first-cut design tool and the comprehensive FDTD solver will be compared, and some of the design trades will be explored to gauge their impact on the performance of the lens.", "title": "" }, { "docid": "2445b8d7618c051acd743f65ef6f588a", "text": "Recent developments in analysis methods on the non-linear and non-stationary data have received large attention by the image analysts. In 1998, Huang introduced the empirical mode decomposition (EMD) in signal processing. The EMD approach, fully unsupervised, proved reliable monodimensional (seismic and biomedical) signals. The main contribution of our approach is to apply the EMD to texture extraction and image filtering, which are widely recognized as a difficult and challenging computer vision problem. We developed an algorithm based on bidimensional empirical mode decomposition (BEMD) to extract features at multiple scales or spatial frequencies. These features, called intrinsic mode functions, are extracted by a sifting process. The bidimensional sifting process is realized using morphological operators to detect regional maxima and thanks to radial basis function for surface interpolation. The performance of the texture extraction algorithms, using BEMD method, is demonstrated in the experiment with both synthetic and natural images. q 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "6c893b6c72f932978a996b6d6283bc02", "text": "Deep metric learning aims to learn an embedding function, modeled as deep neural network. This embedding function usually puts semantically similar images close while dissimilar images far from each other in the learned embedding space. Recently, ensemble has been applied to deep metric learning to yield state-of-the-art results. As one important aspect of ensemble, the learners should be diverse in their feature embeddings. To this end, we propose an attention-based ensemble, which uses multiple attention masks, so that each learner can attend to different parts of the object. We also propose a divergence loss, which encourages diversity among the learners. The proposed method is applied to the standard benchmarks of deep metric learning and experimental results show that it outperforms the state-of-the-art methods by a significant margin on image retrieval tasks.", "title": "" }, { "docid": "9e0ded0d1f913dce7d0ea6aab115678c", "text": "DevOps is changing the way organizations develop and deploy applications and service customers. Many organizations want to apply DevOps, but they are concerned by the security aspects of the produced software. 
This has triggered the creation of the terms SecDevOps and DevSecOps. These terms refer to incorporating security practices in a DevOps environment by promoting the collaboration between the development teams, the operations teams, and the security teams. This paper surveys the literature from academia and industry to identify the main aspects of this trend. The main aspects that we found are: definition, security best practices, compliance, process automation, tools for SecDevOps, software configuration, team collaboration, availability of activity data and information secrecy. Although the number of relevant publications is low, we believe that the terms are not buzzwords, they imply important challenges that the security and software communities shall address to help organizations develop secure software while applying DevOps processes.", "title": "" }, { "docid": "dc817bc11276d76f8d97f67e4b1b2155", "text": "Abstract A Security Operation Center (SOC) is made up of five distinct modules: event generators, event collectors, message database, analysis engines and reaction management software. The main problem encountered when building a SOC is the integration of all these modules, usually built as autonomous parts, while matching availability, integrity and security of data and their transmission channels. In this paper we will discuss the functional architecture needed to integrate those modules. Chapter one will introduce the concepts behind each module and briefly describe common problems encountered with each of them. In chapter two we will design the global architecture of the SOC. We will then focus on collection & analysis of data generated by sensors in chapters three and four. A short conclusion will describe further research & analysis to be performed in the field of SOC design.", "title": "" }, { "docid": "bfd7c204dec258679e15ce477df04cad", "text": "Clarification is needed regarding the definitions and classification of groove and hollowness of the infraorbital region depending on the cause, anatomical characteristics, and appearance. Grooves in the infraorbital region can be classified as nasojugal grooves (or folds), tear trough deformities, and palpebromalar grooves; these can be differentiated based on anatomical characteristics. They are caused by the herniation of intraorbital fat, atrophy of the skin and subcutaneous fat, contraction of the orbital part of the orbicularis oculi muscle or squinting, and malar bone resorption. Safe and successful treatment requires an optimal choice of filler and treatment method. The choice between a cannula and needle depends on various factors; a needle is better for injections into a subdermal area in a relatively safe plane, while a cannula is recommended for avoiding vascular compromise when injecting filler into a deep fat layer and releasing fibrotic ligamentous structures. The injection of a soft-tissue filler into the subcutaneous fat tissue is recommended for treating mild indentations around the orbital rim and nasojugal region. Reducing the tethering effect of ligamentous structures by undermining using a cannula prior to the filler injection is recommended for treating relatively deep and fine indentations. 
The treatment of mild prolapse of the intraorbital septal fat or broad flattening of the infraorbital region can be improved by restoring the volume deficiency using a relatively firm filler.", "title": "" }, { "docid": "78e8f84224549b75584c59591a8febef", "text": "Our goal is to design architectures that retain the groundbreaking performance of Convolutional Neural Networks (CNNs) for landmark localization and at the same time are lightweight, compact and suitable for applications with limited computational resources. To this end, we make the following contributions: (a) we are the first to study the effect of neural network binarization on localization tasks, namely human pose estimation and face alignment. We exhaustively evaluate various design choices, identify performance bottlenecks, and more importantly propose multiple orthogonal ways to boost performance. (b) Based on our analysis, we propose a novel hierarchical, parallel and multi-scale residual architecture that yields large performance improvement over the standard bottleneck block while having the same number of parameters, thus bridging the gap between the original network and its binarized counterpart. (c) We perform a large number of ablation studies that shed light on the properties and the performance of the proposed block. (d) We present results for experiments on the most challenging datasets for human pose estimation and face alignment, reporting in many cases state-of-the-art performance. (e) We further provide additional results for the problem of facial part segmentation. Code can be downloaded from https://www.adrianbulat.com/binary-cnn-landmarks.", "title": "" }, { "docid": "424239765383edd8079d90f63b3fde1d", "text": "The availability of huge amounts of medical data leads to the need for powerful data analysis tools to extract useful knowledge. Researchers have long been concerned with applying statistical and data mining tools to improve data analysis on large data sets. Disease diagnosis is one of the applications where data mining tools are proving successful results. Heart disease is the leading cause of death all over the world in the past ten years. Several researchers are using statistical and data mining tools to help health care professionals in the diagnosis of heart disease. Using single data mining technique in the diagnosis of heart disease has been comprehensively investigated showing acceptable levels of accuracy. Recently, researchers have been investigating the effect of hybridizing more than one technique showing enhanced results in the diagnosis of heart disease. However, using data mining techniques to identify a suitable treatment for heart disease patients has received less attention. This paper identifies gaps in the research on heart disease diagnosis and treatment and proposes a model to systematically close those gaps to discover if applying data mining techniques to heart disease treatment data can provide as reliable performance as that achieved in diagnosing heart disease.", "title": "" }, { "docid": "665f109e8263b687764de476befcbab9", "text": "In this work we analyze the behavior on a company-internal social network site to determine which interaction patterns signal closeness between colleagues. Regression analysis suggests that employee behavior on social network sites (SNSs) reveals information about both professional and personal closeness. While some factors are predictive of general closeness (e.g. 
content recommendations), other factors signal that employees feel personal closeness towards their colleagues, but not professional closeness (e.g. mutual profile commenting). This analysis contributes to our understanding of how SNS behavior reflects relationship multiplexity: the multiple facets of our relationships with SNS connections.", "title": "" }, { "docid": "982df058d920dbb8b2c9d012b50b62a3", "text": "A recommendation system tracks past purchases of a group of users to make product recommendations to individual members of the group. In this paper we present a notion of competitive recommendation systems, building on recent theoretical work on this subject. We reduce the problem of achieving competitiveness to a problem in matrix reconstruction. We then present a matrix reconstruction scheme that is competitive: it requires a small overhead in the number of users and products to be sampled, delivering in the process a net utility that closely approximates the best possible with full knowledge of all user-product preferences.", "title": "" }, { "docid": "99c1ad04419fa0028724a26e757b1b90", "text": "Contrary to popular belief, despite decades of research in fingerprints, reliable fingerprint recognition is still an open problem. Extracting features out of poor quality prints is the most challenging problem faced in this area. This paper introduces a new approach for fingerprint enhancement based on Short Time Fourier Transform(STFT) Analysis. STFT is a well known technique in signal processing to analyze non-stationary signals. Here we extend its application to 2D fingerprint images. The algorithm simultaneously estimates all the intrinsic properties of the fingerprints such as the foreground region mask, local ridge orientation and local ridge frequency. Furthermore we propose a probabilistic approach of robustly estimating these parameters. We experimentally compare the proposed approach to other filtering approaches in literature and show that our technique performs favorably.", "title": "" }, { "docid": "9d55947637b358c4dc30d7ba49885472", "text": "Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models. CCS Concepts •Information systems→ Retrieval models and ranking;", "title": "" }, { "docid": "8996068836559be2b253cd04aeaa285b", "text": "We present AutonoVi-Sim, a novel high-fidelity simulation platform for autonomous driving data generation and driving strategy testing. AutonoVi-Sim is a collection of high-level extensible modules which allows the rapid development and testing of vehicle configurations and facilitates construction of complex traffic scenarios. 
Autonovi-Sim supports multiple vehicles with unique steering or acceleration limits, as well as unique tire parameters and dynamics profiles. Engineers can specify the specific vehicle sensor systems and vary time of day and weather conditions to generate robust data and gain insight into how conditions affect the performance of a particular algorithm. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians, allowing engineers to specify routes for these actors, or to create scripted scenarios which place the vehicle in dangerous reactive situations. Autonovi-Sim facilitates training of deep-learning algorithms by enabling data export from the vehicle's sensors, including camera data, LIDAR, relative positions of traffic participants, and detection and classification results. Thus, AutonoVi-Sim allows for the rapid prototyping, development and testing of autonomous driving algorithms under varying vehicle, road, traffic, and weather conditions. In this paper, we detail the simulator and provide specific performance and data benchmarks.", "title": "" }, { "docid": "2560535c3ad41b46e08b8b39f89f555b", "text": "Crises are unpredictable events that can impact on an organisation’s viability, credibility, and reputation, and few topics have generated greater interest in communication over the past 15 years. This paper builds on early theory such as Fink (1986), and extends the crisis life-cycle theoretical model to enable a better understanding and prediction of the changes and trends of mass media coverage during crises. This expanded model provides a framework to identify and understand the dynamic and multi-dimensional set of relationships that occurs during the crisis life cycle in a rapidly changing and challenging operational environment. Using the 2001 Ansett Airlines’ Easter groundings as a case study, this paper monitors mass media coverage during this organisational crisis. The analysis reinforces the view that, by using proactive strategies, public relations practitioners can better manage mass media crisis coverage. Further, the understanding gained by extending the crisis life cycle to track when and how mass media content changes may help public relations practitioners craft messages and supply information at the outset of each stage of the crisis, thereby maintaining control of the message.", "title": "" }, { "docid": "21df2b20c9ecd6831788e00970b3ca79", "text": "Enterprises today face several challenges when hosting line-of-business applications in the cloud. Central to many of these challenges is the limited support for control over cloud network functions, such as, the ability to ensure security, performance guarantees or isolation, and to flexibly interpose middleboxes in application deployments. In this paper, we present the design and implementation of a novel cloud networking system called CloudNaaS. Customers can leverage CloudNaaS to deploy applications augmented with a rich and extensible set of network functions such as virtual network isolation, custom addressing, service differentiation, and flexible interposition of various middleboxes. CloudNaaS primitives are directly implemented within the cloud infrastructure itself using high-speed programmable network elements, making CloudNaaS highly efficient. 
We evaluate an OpenFlow-based prototype of CloudNaaS and find that it can be used to instantiate a variety of network functions in the cloud, and that its performance is robust even in the face of large numbers of provisioned services and link/device failures.", "title": "" }, { "docid": "48a0e75b97fdaa734f033c6b7791e81f", "text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.", "title": "" }, { "docid": "50afcbdf0482c75ae41afd8525274933", "text": "Adhesive devices of digital pads of gecko lizards are formed by microscopic hair-like structures termed setae that derive from the interaction between the oberhautchen and the clear layer of the epidermis. The two layers form the shedding complex and permit skin shedding in lizards. Setae consist of a resistant but flexible corneous material largely made of keratin-associated beta-proteins (KA beta Ps, formerly called beta-keratins) of 8-22 kDa and of alpha-keratins of 45-60 kDa. 
In Gekko gecko, 19 sauropsid keratin-associated beta-proteins (sKAbetaPs) and at least two larger alpha-keratins are expressed in the setae. Some sKA beta Ps are rich in cysteine (111-114 amino acids), while others are rich in glycine (169-219 amino acids). In the entire genome of Anolis carolinensis 40 Ka beta Ps are present and participate in the formation of all types of scales, pad lamellae and claws. Nineteen sKA beta Ps comprise cysteine-rich 9.2-14.4 kDa proteins of 89-142 amino acids, and 19 are glycine-rich 16.5-22.0 kDa proteins containing 162-225 amino acids, and only two types of sKA beta Ps are cysteine- and glycine-poor proteins. Genes coding for these proteins contain an intron in the 5'-non-coding region, a typical characteristic of most sauropsid Ka beta Ps. Gecko KA beta Ps show a central amino acid region of high homology and a beta-pleated conformation that is likely responsible for the polymerization of Ka beta Ps into long and resistant filaments. The association of numerous filaments, probably over a framework of alpha-keratins, permits the formation of bundles of corneous material for the elongation of setae, which may be over 100 microm long. The terminals branching off each seta may derive from the organization of the cytoskeleton and from the mechanical separation of keratin bundles located at the terminal apex of setae.", "title": "" }, { "docid": "be7f0079a3462e9cf81d44002b8a340e", "text": "Long-term participation in creative activities has benefits for middle-aged and older people that may improve their adaptation to later life. We first investigated the factor structure of the Creative Benefits Scale and then used it to construct a model to help explain the connection between generativity and life satisfaction in adults who participated in creative hobbies. Participants included 546 adults between the ages of 40 and 88 (Mean = 58.30 years) who completed measures of life satisfaction, generativity, and the Creative Benefits Scale with its factors of Identity, Calming, Spirituality, and Recognition. Structural equation modeling was used to examine the connection of age with life satisfaction in older adults and to explore the effects of creativity on this relation. The proposed model of life satisfaction, incorporating age, creativity, and generativity, fit the data well, indicating that creativity may help explain the link between the generativity and life satisfaction.", "title": "" } ]
scidocsrr
9673f0309bf8d1c76e00a2e66f9a7863
Data Mining of Online Genealogy Datasets for Revealing Lifespan Patterns in Human Population
[ { "docid": "8ddf705b1fdd09f33870e940f19aa0e2", "text": "BACKGROUND\nThe prevalence of obesity has increased substantially over the past 30 years. We performed a quantitative analysis of the nature and extent of the person-to-person spread of obesity as a possible factor contributing to the obesity epidemic.\n\n\nMETHODS\nWe evaluated a densely interconnected social network of 12,067 people assessed repeatedly from 1971 to 2003 as part of the Framingham Heart Study. The body-mass index was available for all subjects. We used longitudinal statistical models to examine whether weight gain in one person was associated with weight gain in his or her friends, siblings, spouse, and neighbors.\n\n\nRESULTS\nDiscernible clusters of obese persons (body-mass index [the weight in kilograms divided by the square of the height in meters], > or =30) were present in the network at all time points, and the clusters extended to three degrees of separation. These clusters did not appear to be solely attributable to the selective formation of social ties among obese persons. A person's chances of becoming obese increased by 57% (95% confidence interval [CI], 6 to 123) if he or she had a friend who became obese in a given interval. Among pairs of adult siblings, if one sibling became obese, the chance that the other would become obese increased by 40% (95% CI, 21 to 60). If one spouse became obese, the likelihood that the other spouse would become obese increased by 37% (95% CI, 7 to 73). These effects were not seen among neighbors in the immediate geographic location. Persons of the same sex had relatively greater influence on each other than those of the opposite sex. The spread of smoking cessation did not account for the spread of obesity in the network.\n\n\nCONCLUSIONS\nNetwork phenomena appear to be relevant to the biologic and behavioral trait of obesity, and obesity appears to spread through social ties. These findings have implications for clinical and public health interventions.", "title": "" } ]
[ { "docid": "2b7f3b4d099d447f6fd5dc13d75fa44d", "text": "Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer aided design of cloth. Previous methods produce highly realistic images, however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.", "title": "" }, { "docid": "1efee2d22c2f982ba94d874e061adc7d", "text": "A PWM plus phase-shift control bidirectional DC-DC converter is proposed. In this converter, PWM control and phase-shift control are combined to reduce current stress and conducting loss, and to expand ZVS range. The operation principle and analysis of the converter are explained, and ZVS condition is derived. A prototype of PWM plus phase-shift bidirectional DC-DC converter is built to verify analysis.", "title": "" }, { "docid": "255a155986548bb873ee0bc88a00222b", "text": "Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model-representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based-encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model-stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern-similarities measured between all pairs of model-stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity-encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently low cost computationally), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. 
Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns.", "title": "" }, { "docid": "baf3101f70784ff4dfb85cda627575e7", "text": "Synchronous reluctance (SynRel) machines are gaining more and more importance in various fields of application thanks to their known merits like rugged construction, high efficiency, absence of field windings, and no or reduced need for permanent magnets. Out of the possible design variants, in this paper, SynRel motors with uniform mechanical air gap and circularly shaped flux barriers are considered and a conformal-mapping approach to their analytical modeling and simulation is proposed. A suitable conformal transformation is introduced to compute the reluctance of each rotor circularly shaped flux barrier and the result is then used to analytically determine the air-gap flux density distribution and the electromagnetic torque of the machine in arbitrary operating conditions. The accuracy of the methodology proposed is assessed against finite element analysis.", "title": "" }, { "docid": "14fc2ae5f303f8de2a54701a45737d30", "text": "There are many challenges people face while learning and writing English. Learning the proper use of punctuation is challenging for people of all ages. Due to enormous breakthroughs in computer technology, there has been a huge interest in the development of Intelligent Tutoring Systems (ITS) to help learners learn a language. We developed an ITS that helps students to learn the rules of English punctuation where students interactively punctuate an unpunctuated piece of sentence. The system offers text to student, in which they can perform the punctuation, and if necessary, make corrections. A student's answer is then analyzed by the system with possible correct solutions/hints, and gives specific feedback. The ITS was evaluated during several sessions for 10–11 year old school children. The evaluation results show that the children effectively acquired the skillset of comma and full stop rules represented in the system.", "title": "" }, { "docid": "5694ebf4c1f1e0bf65dd7401d35726ed", "text": "Data collection is not a big issue anymore with available honeypot software and setups. However malware collections gathered from these honeypot systems often suffer from massive sample counts, data analysis systems like sandboxes cannot cope with. Sophisticated self-modifying malware is able to generate new polymorphic instances of itself with different message digest sums for each infection attempt, thus resulting in many different samples stored for the same specimen. Scaling analysis systems that are fed by databases that rely on sample uniqueness based on message digests is only feasible to a certain extent. In this paper we introduce a non cryptographic, fast to calculate hash function for binaries in the Portable Executable format that transforms structural information about a sample into a hash value. Grouping binaries by hash values calculated with the new function allows for detection of multiple instances of the same polymorphic specimen as well as samples that are broken e.g. due to transfer errors. 
Practical evaluation on different malware sets shows that the new function allows for a significant reduction of sample counts.", "title": "" }, { "docid": "f9aa7a61863160fbffa5e9244146e178", "text": "PURPOSE\nThere is a growing interest in the use of bedside ultrasonography to assess gastric content and volume. It has been suggested that the gastric antrum in particular can be assessed reliably by sonography. The aim of this observational study was to provide a qualitative description of the sonographic characteristics of the gastric antrum when the stomach is empty and following the ingestion of clear fluid, milk, and solid content.\n\n\nCLINICAL FEATURES\nSix healthy volunteers were examined on four different occasions (24 scanning sessions): following a period of eight hours of fast and following ingestion of 200 mL of apple juice, 200 mL of 2% milk, and a standard solid meal (sandwich and apple juice). Examinations were performed following a standardized scanning protocol by two clinical anesthesiologists with previous experience in gastric sonography. For each type of gastric content, the sonographic characteristics of the antrum and its content are described and illustrated with figures.\n\n\nCONCLUSIONS\nBedside sonography can determine the nature of gastric content (nil, clear fluid, thick fluid/solid). This qualitative information by itself may be useful to assess risk of aspiration, particularly in situations when prandial status is unknown or uncertain.", "title": "" }, { "docid": "f2ad7ee516b673dffbf3d693118ec142", "text": "This paper discusses the security of data in cloud computing. It is a study of data in the cloud and aspects related to it concerning security. The paper will go in to details of data protection methods and approaches used throughout the world to ensure maximum data protection by reducing risks and threats. Availability of data in the cloud is beneficial for many applications but it poses risks by exposing data to applications which might already have security loopholes in them. Similarly, use of virtualization for cloud computing might risk data when a guest OS is run over a hypervisor without knowing the reliability of the guest OS which might have a security loophole in it. The paper will also provide an insight on data security aspects for Data-in-Transit and Data-at-Rest. The study is based on all the levels of SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service).", "title": "" }, { "docid": "da7d45d2cbac784d31e4d3957f4799e6", "text": "Malicious Uniform Resource Locator (URL) detection is an important problem in web search and mining, which plays a critical role in internet security. In literature, many existing studies have attempted to formulate the problem as a regular supervised binary classification task, which typically aims to optimize the prediction accuracy. However, in a real-world malicious URL detection task, the ratio between the number of malicious URLs and legitimate URLs is highly imbalanced, making it very inappropriate for simply optimizing the prediction accuracy. Besides, another key limitation of the existing work is to assume a large amount of training data is available, which is impractical as the human labeling cost could be potentially quite expensive. 
To solve these issues, in this paper, we present a novel framework of Cost-Sensitive Online Active Learning (CSOAL), which only queries a small fraction of training data for labeling and directly optimizes two cost-sensitive measures to address the class-imbalance issue. In particular, we propose two CSOAL algorithms and analyze their theoretical performance in terms of cost-sensitive bounds. We conduct an extensive set of experiments to examine the empirical performance of the proposed algorithms for a large-scale challenging malicious URL detection task, in which the encouraging results showed that the proposed technique, by querying an extremely small amount of labeled data (about 0.5% out of 1-million instances), can achieve better or highly comparable classification performance in comparison to the state-of-the-art cost-insensitive and cost-sensitive online classification algorithms using a huge amount of labeled data.", "title": "" }, { "docid": "2d105fcec4109a6bc290c616938012f3", "text": "One of the biggest challenges in automated driving is the ability to determine the vehicle's location in realtime - a process known as self-localization or ego-localization. An automated driving system must be reliable under harsh conditions and environmental uncertainties (e.g. GPS denial or imprecision), sensor malfunction, road occlusions, poor lighting, and inclement weather. To cope with this myriad of potential problems, systems typically consist of a GPS receiver, in-vehicle sensors (e.g. cameras and LiDAR devices), and 3D High-Definition (3D HD) Maps. In this paper, we review state-of-the-art self-localization techniques, and present a benchmark for the task of image-based vehicle self-localization. Our dataset was collected on 10km of the Warren Freeway in the San Francisco Area under reasonable traffic and weather conditions. As input to the localization process, we provide timestamp-synchronized, consumer-grade monocular video frames (with camera intrinsic parameters), consumer-grade GPS trajectory, and production-grade 3D HD Maps. For evaluation, we provide survey-grade GPS trajectory. The goal of this dataset is to standardize and formalize the challenge of accurate vehicle self-localization and provide a benchmark to develop and evaluate algorithms.", "title": "" }, { "docid": "f03c4718a0d85917ea870a90c9bb05c5", "text": "Conventional time-delay estimators exhibit dramatic performance degradations in the presence of multipath signals. This limits their application in reverberant enclosures, particularly when the signal of interest is speech and it may not be possible to estimate and compensate for channel effects prior to time-delay estimation. This paper details an alternative approach which reformulates the problem as a linear regression of phase data and then estimates the time-delay through minimization of a robust statistical error measure. The technique is shown to be less susceptible to room reverberation effects. Simulations are performed across a range of source placements and room conditions to illustrate the utility of the proposed time-delay estimation method relative to conventional methods.", "title": "" }, { "docid": "d8acad39fccbe5a3d060d228234ee65a", "text": "Voice communication over the internet would not be possible without a reliable data network; this first became available when distributed network topologies were used in conjunction with data packets. Early networks used a single centre node network in which a single workstation (Server) was responsible for the communication. 
This posed problems: if there was a fault with the centre node (workstation), nothing would work. This problem was solved by the distributed system, in which reliability increases by spreading the load between many nodes. The ideas of packet switching & distributed networks were combined; this combination increased reliability and speed and made voice communication over the internet possible. In Voice-over-IP (VoIP), data packets travel through a packet-switched network such as the Internet and arrive at their destination where they are decompressed using a compatible Codec (audio coder/decoder) and converted back to analogue audio. This paper deals with the Simulink architecture for a VoIP network. Keywords: VoIP, G.711, Wave file", "title": "" }, { "docid": "a3dc04fe9478f881608289ae13e979cb", "text": "Background: The white matter of the cerebellum has a population of GFAP+ cells with neurogenic potential restricted to early postnatal development (P2-P12); these astrocytes are the precursors of stellate cells and basket cells in the molecular layer. On the other hand, GABA is known to serve as a feedback regulator of neural production and migration through tonic activation of GABA-A receptors. Aim: To investigate the functional expression of GABA-A receptors in the cerebellar white matter astrocytes at P7-9 and P18-20. Methods: Immunofluorescence for α1, α2, β1 subunits & GAD67 enzyme in GFAP-EGFP mice (n=10 P8; n=8 P18). Calcium Imaging: horizontal acute slices were incubated with Fluo4 AM in order to measure the effect of GABA-A or GATs antagonist bicuculline or nipecotic acid on spontaneous calcium oscillations, as well as on GABA application evoked responses. Results: Our results showed that α1 (3.18%), α2 (10.4%) and β1 (not detected) subunits were not predominantly expressed in astrocytes of white matter at P8. However, GAD67 co-localized with 54% of GFAP+ cells, suggesting that a fraction of astrocytes could synthesize GABA. Moreover, calcium imaging experiments showed that white matter cells responded to GABA. This response was antagonized by bicuculline, suggesting functional expression of GABA-A receptors. Conclusions: Together these results suggest that GABA is synthesized by half of the astrocytes in white matter at P8 and that GABA could be released locally to activate GABA-A receptors that are also expressed in cells of the white matter of the cerebellum, during early postnatal development. (D) Acknowledgements: We thank the technical support of E. N. Hernández-Ríos, A. Castilla, L. Casanova, A. E. Espino & M. García-Servín. F.E. Labrada-Moncada is a CONACyT (640190) scholarship holder. This work was supported by PAPIIT-UNAM grants (IN201913 e IN201915) to A. Martínez-Torres and D. Reyes-Haro. 48. PROLACTIN PROTECTS AGAINST JOINT INFLAMMATION AND BONE LOSS IN ARTHRITIS Ledesma-Colunga MG, Adán N, Ortiz G, Solis-Gutierrez M, López-Barrera F, Martínez de la Escalera G, y Clapp C. Departamento de Neurobiología Celular y Molecular, Instituto de Neurobiología, UNAM Campus Juriquilla, Querétaro, México. Prolactin (PRL) reduces joint inflammation, pannus formation, and bone destruction in rats with polyarticular adjuvant-induced arthritis (AIA). Here, we investigate the mechanism of PRL protection against bone loss in AIA and in monoarticular AIA (MAIA). Joint inflammation and osteoclastogenesis were evaluated in rats with AIA treated with PRL (via osmotic minipumps) and in mice with MAIA that were null (Prlr-/-) or not (Prlr+/+) for the PRL receptor. 
To help define target cells, synovial fibroblasts isolated from healthy Prlr+/+ mice were treated or not with T-cell-derived cytokines (Cyt: TNFa, IL-1b, and IFNg) with or without PRL. In AIA, PRL treatment reduced joint swelling, lowered joint histochemical accumulation of the osteoclast marker, tartrate-resistant acid phosphatase (TRAP), and decreased joint mRNA levels of osteoclast-associated genes (Trap, Cathepsin K, Mmp9, Rank) and of cytokines with osteoclastogenic activity (Tnfa, Il-1b, Il-6, Rankl). Prlr-/- mice with MAIA showed enhanced joint swelling, increased TRAP activity, and elevated expression of Trap, Rankl, and Rank. The expression of the long PRL receptor form increased in arthritic joints, and in joints and cultured synovial fibroblasts treated with Cyt. PRL induced the phosphorylation/activation of signal transducer and activator of transcription-3 (STAT3) and inhibited the Cyt-induced expression of Il-1b, Il-6, and Rankl in synovial cultures. The STAT3 inhibitor S31-201 blocked inhibition of Rankl by PRL. PRL protects against bone loss in inflammatory arthritis by inhibiting cytokine-induced activation of RANKL in joints and synoviocytes via its canonical STAT3 signaling pathway. Hyperprolactinemia-inducing drugs are promising therapeutics for preventing bone loss in rheumatoid arthritis. We thank Gabriel Nava, Daniel Mondragón, Antonio Prado, Martín García, and Alejandra Castilla for technical assistance. Research Support: UNAM-PAPIIT Grant IN201315. M.G.L.C. is a doctoral student from Programa de Doctorado en Ciencias Biomédicas, Universidad Nacional Autónoma de México (UNAM) receiving fellowship 245828 from CONACYT. (D) 49. ADC MEASUREMENT IN LATERAL MEDULLARY INFARCTION (WALLENBERG SYNDROME) León-Castro LR1, Fourzán-Martínez M1, Rivas-Sánchez LA1, García-Zamudio E1, Nigoche J2, Ortíz-Retana J1, Barragán-Campos HM1. 1.Magnetic Resonance Unit, Institute of Neurobiology, Campus Juriquilla, National Autonomous University of México. Querétaro, Qro., 2.Department of Radiology. Naval Highly Specialized General Hospital, México City, México. BACKGROUND: Stroke of the vertebrobasilar system (VBS) represents 20% of ischemic vascular events. When the territory of the posterior inferior cerebellar artery (PICA) is affected, lateral medullary infarction (LMI) occurs, typically called Wallenberg syndrome; it accounts for 2-7% of strokes of the VBS. Given the diversity of symptoms that it causes, it is a difficult disease to diagnose. The reference exam to evaluate cerebral blood flow is digital subtraction angiography (DSA); however, it is an invasive method. Magnetic resonance imaging (MRI) is a noninvasive study, and the diffusion sequence (DWI) can detect early ischemic changes as soon as 20 minutes after ischemia onset; it also makes it possible to locate and determine the extent of the affected parenchyma. Measurement of the apparent diffusion coefficient (ADC) is a semiquantitative parameter that confirms or rules out the presence of infarction even when the diffusion sequence (DWI) shows a restriction signal. OBJECTIVE: To measure the ADC values in patients with LMI and compare their values with those of the contralateral healthy tissue. MATERIALS AND METHODS: The database of the Magnetic Resonance Unit was reviewed for studies carried out from January 2010 to July 2016 to include cases diagnosed by MRI with LMI. The images were acquired in two resonators of 3.0 T (Phillips Achieva TX and General Electric Discovery 750 MR). 
DWI sequence with a b value of 1000 was used to look for LMI, and then ADC value measurement of the infarcted area and the contralateral area was performed in the same patient. Two groups were identified: a) infarction and b) healthy tissue. Eleven patients, 5 females (45.5%) and 6 males (54.5%), were included. Descriptive statistics were performed, and infarction and healthy tissue were compared with the Mann-Whitney U test. RESULTS: In the restriction areas observed in DWI, ADC values were measured; the infarcted tissue had a median of 0.54X10-3 mm2/s, interquartile range 0.41-1.0X10-3 mm2/s; the healthy tissue had a median of 0.24X10-3 mm2/s, interquartile range 0.19-0.56X10-3 mm2/s. The Mann-Whitney U test showed statistical significance (p<0.05). CONCLUSION: ADC measurement makes it possible to confirm or rule out LMI in patients with the clinical suspicion of Wallenberg syndrome. It also serves to exclude other diseases that show restriction in DWI; for example, neoplasm, pontine myelinolysis, acute disseminated encephalomyelitis, multiple sclerosis and diffuse axonal injury. (L) 50. ENDOVASCULAR CAROTID STENTING IN A PATIENT WITH PREVIOUS STROKE, ISCHEMIC HEART DISEASE, AND SEVERE AORTIC VALVE STENOSIS Lona-Pérez OA1, Balderrama-Bañares J2, Martínez-Reséndiz JA3, Yáñez-Ledesma M4, Jiménez-Zarazúa O5, Vargas-Jiménez MA6, Galeana-Juárez C6, Asensio-Lafuente E7, Barinagarrementeria-Aldatz F8, Barragán Campos H.M9,10. 1.2nd year student of the Faculty of Medicine at the University Autonomous of Querétaro, Qro., 2. Endovascular Neurological Therapy Department, Neurology and Neurosurgery National Institute “Dr. Manuel Velasco Suarez”, México City, México., 3. Department of Anesthesiology, Querétaro General Hospital, SESEQ, Querétaro, Qro., 4. Department of Anesthesiology, León Angeles Hospital, Gto., 5. Internal Medicine Department, León General Hospital, Gto., 6. Coordination of Clinical Rotation, Faculty of Medicine at the University Autonomous of Querétaro, Qro., 7. Cardiology-Electrophysiology, Hospital H+, Querétaro, Qro., 8. Neurologist, Permanent Member of National Academy of Medicine of Mexico, Hospital H+, Querétaro, Qro., 9. Magnetic Resonance Unit, Institute of Neurobiology, Campus Juriquilla, National Autonomous University of México, Querétaro, Qro., 10. Radiology Department. Querétaro General Hospital, SESEQ, Querétaro, Qro OBJECTIVE: We present a case report of a 74-year-old female patient who suffered from a right superior gyrus stroke, ischemic heart disease, and severe aortic valve stenosis, in whom it was necessary to identify which problem had to be treated first. Family history of breast, pancreas, and prostate cancer in first-degree relatives; smoking 5 packs/year for >20 years; occasional alcoholism; right inguinal hernioplasty; hypertension and dyslipidemia of 3 years of evolution, under treatment. She presented angor pectoris at rest that lasted 3 minutes and recovered spontaneously; 7 days later she had a brain stroke in the superior right frontal gyrus and developed hemiparesis with left crural predominance. MATERIALS & METHODS: Anamnesis, complete physical examination, laboratory tests, as well as heart and brain imaging were performed. 
Severe aortic valvular stenosis was diagnosed by echocardiogram, with a 0.6 cm2 valvular area, an average gradient of 38 mmHg and a maximum of 66 mmHg; mild mitral stenosis with a valvular area of 1.8 cm2, without left atrium dilatation, maximum gradient of 8 mmHg; PSAP 30 mmHg. US Carotid Doppler showed atherosclerotic plaques in the proximal posterior wall of the bulb of the right internal carotid artery (RICA) that determine a maximum stenosis of 70%. Aggressive management with antihypertensive (Met", "title": "" }, { "docid": "15fa73633d6ec7539afc91bb1f45098f", "text": "Continued advances in mobile networks and positioning technologies have created a strong market push for location-based applications. Examples include location-aware emergency response, location-based advertisement, and location-based entertainment. An important challenge in the wide deployment of location-based services (LBSs) is the privacy-aware management of location information, providing safeguards for location privacy of mobile clients against vulnerabilities for abuse. This paper describes a scalable architecture for protecting the location privacy from various privacy threats resulting from uncontrolled usage of LBSs. This architecture includes the development of a personalized location anonymization model and a suite of location perturbation algorithms. A unique characteristic of our location privacy architecture is the use of a flexible privacy personalization framework to support location k-anonymity for a wide range of mobile clients with context-sensitive privacy requirements. This framework enables each mobile client to specify the minimum level of anonymity that it desires and the maximum temporal and spatial tolerances that it is willing to accept when requesting k-anonymity-preserving LBSs. We devise an efficient message perturbation engine to implement the proposed location privacy framework. The prototype that we develop is designed to be run by the anonymity server on a trusted platform and performs location anonymization on LBS request messages of mobile clients such as identity removal and spatio-temporal cloaking of the location information. We study the effectiveness of our location cloaking algorithms under various conditions by using realistic location data that is synthetically generated from real road maps and traffic volume data. Our experiments show that the personalized location k-anonymity model, together with our location perturbation engine, can achieve high resilience to location privacy threats without introducing any significant performance penalty.", "title": "" }, { "docid": "feafd64c9f81b07f7f616d2e36e15e0c", "text": "Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job, and is defined by the three dimensions of exhaustion, cynicism, and inefficacy. The past 25 years of research has established the complexity of the construct, and places the individual stress experience within a larger organizational context of people's relation to their work. Recently, the work on burnout has expanded internationally and has led to new conceptual models. The focus on engagement, the positive antithesis of burnout, promises to yield new perspectives on interventions to alleviate burnout. 
The social focus of burnout, the solid research basis concerning the syndrome, and its specific ties to the work domain make a distinct and valuable contribution to people's health and well-being.", "title": "" }, { "docid": "62773348cf1d2cda966ec63f62f93efb", "text": "In 2003, psychology professor and sex researcher J. Michael Bailey published a book entitled The Man Who Would Be Queen: The Science of Gender-Bending and Transsexualism. The book's portrayal of male-to-female (MTF) transsexualism, based on a theory developed by sexologist Ray Blanchard, outraged some transgender activists. They believed the book to be typical of much of the biomedical literature on transsexuality-oppressive in both tone and claims, insulting to their senses of self, and damaging to their public identities. Some saw the book as especially dangerous because it claimed to be based on rigorous science, was published by an imprint of the National Academy of Sciences, and argued that MTF sex changes are motivated primarily by erotic interests and not by the problem of having the gender identity common to one sex in the body of the other. Dissatisfied with the option of merely criticizing the book, a small number of transwomen (particularly Lynn Conway, Andrea James, and Deirdre McCloskey) worked to try to ruin Bailey. Using published and unpublished sources as well as original interviews, this essay traces the history of the backlash against Bailey and his book. It also provides a thorough exegesis of the book's treatment of transsexuality and includes a comprehensive investigation of the merit of the charges made against Bailey that he had behaved unethically, immorally, and illegally in the production of his book. The essay closes with an epilogue that explores what has happened since 2003 to the central ideas and major players in the controversy.", "title": "" }, { "docid": "37c35b782bb80d2324749fc71089c445", "text": "Predicting the stock market is considered to be a very difficult task due to its non-linear and dynamic nature. Our proposed system is designed in such a way that even a layman can use it. It reduces the burden on the user. The user’s job is to give only the recent closing prices of a stock as input and the proposed Recommender system will instruct him when to buy and when to sell if it is profitable or not to buy share in case if it is not profitable to do trading. Using soft computing based techniques is considered to be more suitable for predicting trends in stock market where the data is chaotic and large in number. The soft computing based systems are capable of extracting relevant information from large sets of data by discovering hidden patterns in the data. Here regression trees are used for dimensionality reduction and clustering is done with the help of Self Organizing Maps (SOM). The proposed system is designed to assist stock market investors identify possible profit-making opportunities and also help in developing a better understanding on how to extract the relevant information from stock price data. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2e7d3ee82eec3b46fed7ce1e4aff0249", "text": "A room-temperature wafer bonding process using Al/Ti/Au multi-layers has been demonstrated for integrated reflectors in the ultraviolet (UV) spectral region. 
A glass wafer with Al (35 nm)/Ti (3 nm)/Au (15 nm) layers and the wafer with Ti (3 nm)/Au (15 nm) layers were successfully bonded in ambient air at room temperature after argon radio-frequency plasma activation process. A thin native aluminum oxide (Al2O3) layer was formed on the Al layer and was found to be effective in avoiding solid-state reactive diffusion between Au and Al. Interface reflectance of more than 80 % was obtained in the UV spectral region (200-400 nm).", "title": "" }, { "docid": "ba974ef3b1724a0b31331f558ed13e8e", "text": "The paper presents a simple and effective sketch-based algorithm for large scale image retrieval. One of the main challenges in image retrieval is to localize a region in an image which would be matched with the query image in contour. To tackle this problem, we use the human perception mechanism to identify two types of regions in one image: the first type of region (the main region) is defined by a weighted center of image features, suggesting that we could retrieve objects in images regardless of their sizes and positions. The second type of region, called region of interests (ROI), is to find the most salient part of an image, and is helpful to retrieve images with objects similar to the query in a complicated scene. So using the two types of regions as candidate regions for feature extraction, our algorithm could increase the retrieval rate dramatically. Besides, to accelerate the retrieval speed, we first extract orientation features and then organize them in a hierarchal way to generate global-to-local features. Based on this characteristic, a hierarchical database index structure could be built which makes it possible to retrieve images on a very large scale image database online. Finally a real-time image retrieval system on 4.5 million database is developed to verify the proposed algorithm. The experiment results show excellent retrieval performance of the proposed algorithm and comparisons with other algorithms are also given.", "title": "" } ]
scidocsrr
a6660031cc211d2d075df01d57a27fa2
Nonresonating Mode Waveguide Filters
[ { "docid": "f762e0937878a406e4200ab72d4d3463", "text": "In this paper, patent pending substrate integrated waveguide (SIW) bandpass filters with moderate fractional bandwidth and improved stopband performance are proposed and demonstrated for a Ka-band satellite ground terminal. Nonphysical cross-coupling provided by higher order modes in the oversized SIW cavities is used to generate the finite transmission zeros far away from the passband for improved stopband performance. Different input/output topologies of the filter are discussed for wide stopband applications. Design considerations including the design approach, filter configuration, and tolerance analysis are addressed. Two fourth-order filters with a passband of 19.2-21.2 GHz are fabricated on a single-layer Rogers RT/Duroid 6002 substrate using linear arrays of metallized via-holes by a standard printed circuit board process. Measured results of the two filters agree very well with simulated results, showing the in-band insertion loss is 0.9 dB or better, and the stopband attenuation in the frequency band of 29.5-30 GHz is better than 50 dB. Measurements over a temperature range of -20degC to +40degC show the passband remains almost unchanged.", "title": "" } ]
[ { "docid": "1f629796e9180c14668e28b83dc30675", "text": "In this article we tackle the issue of searchable encryption with a generalized query model. Departing from many previous works that focused on queries consisting of a single keyword, we consider the case of queries consisting of arbitrary boolean expressions on keywords, that is to say conjunctions and disjunctions of keywords and their complement. Our construction of boolean symmetric searchable encryption (BSSE) is mainly based on the orthogonalization of the keyword field according to the Gram-Schmidt process. Each document stored in an outsourced server is associated with a label which contains all the keywords corresponding to the document, and searches are performed by way of a simple inner product. Furthermore, the queries in the BSSE scheme are randomized. This randomization hides the search pattern of the user since the search results cannot be associated deterministically to queries. We formally define an adaptive security model for the BSSE scheme. In addition, the search complexity is in $O(n)$ where $n$ is the number of documents stored in the outsourced server.", "title": "" }, { "docid": "22a39638b0c780fce60b7decca4beb19", "text": "We investigate the use of data analytics internally by companies’ finance and accounting functions to prepare the financial statements and to detect fraud, as well as external auditors’ use of data analytics during the financial statement audit. Relying on Socio-technical theory, we examine how each of these groups uses data analytics, how that usage affects their interactions with each other (i.e., client-auditor interactions), and the effect of rules and regulations on their use of data analytics. As such, we conducted 58 semi-structured interviews with prominent professionals from 15 companies, eight public accounting firms, and six standard-setters/regulators. Our sample also includes 12 client-auditor pairs (i.e., CFOs and their respective audit partners). Our findings suggest that most companies and their auditors have made changes to the financial reporting and audit processes to incorporate data analytics, with each group most often noting improved financial reporting quality or audit quality as a key benefit. Despite the benefits, both groups reported challenges that come with using data analytics, including finding employees with the right skillset, overcoming the financial cost, dealing with the lack of regulation/standards, and obtaining the data needed for analytics. Further, we leverage our client-auditor pairs to examine the effects of data analytics on the client-auditor relationship. Both parties believe the use of analytics has strengthened their relationship. However, we identify potential future conflicts regarding the audit fee model, as well as regulator concern over the increased business insights auditors are providing their clients as a result of analytics. Our study makes several important contributions to practice and theory, as we are among the first to empirically examine companies’ and audit firms’ recent and significant investment in developing data analytic tools.", "title": "" }, { "docid": "324dd33f8f5d49094c13b401f75b1d84", "text": "As agile is maturing and becoming more widely adopted, it is important that researchers are aware of the challenges faced by practitioners and organisations. We undertook a thematic analysis of 193 agile challenges collected at a series of agile conferences and events during 2013 and 2014. 
Participants were mainly practitioners and business representatives along with some academics. The challenges were thematically analysed by separate authors, synthesised, and a list of seven themes and 27 sub-themes was agreed. Themes were Organisation, Sustainability, Culture, Teams, Scale, Value and Claims and Limitations. We compare our findings against previous attempts to identify and categorise agile challenges. While most themes have persisted we found a shift of focus towards sustainability, business engagement and transformation, as well as claims and limitations. We identify areas for further research and a need for innovative methods of conveying academic research to industry and industrial problems to academia.", "title": "" }, { "docid": "826e54e8e46dcea0451b53645e679d55", "text": "Microtia is a congenital disease with various degrees of severity, ranging from the presence of rudimentary and malformed vestigial structures to the total absence of the ear (anotia). The complex anatomy of the external ear and the necessity to provide good projection and symmetry make this reconstruction particularly difficult. The aim of this work is to report our surgical technique of microtic ear correction and to analyse the short and long term results. From 2000 to 2013, 210 patients affected by microtia were treated at the Maxillo-Facial Surgery Division, Head and Neck Department, University Hospital of Parma. The patient population consisted of 95 women and 115 men, aged from 7 to 49 years. A total of 225 reconstructions have been performed in two surgical stages basing of Firmin's technique with some modifications and refinements. The first stage consists in fabrication and grafting of a three-dimensional costal cartilage framework. The second stage is performed 5-6 months later: the reconstructed ear is raised up and an additional cartilaginous graft is used to increase its projection. A mastoid fascial flap together with a skin graft are then used to protect the cartilage graft. All reconstructions were performed without any major complication. The results have been considered satisfactory by all patients starting from the first surgical step. Low morbidity, the good results obtained and a high rate of patient satisfaction make our protocol an optimal choice for treatment of microtia. The surgeon's experience and postoperative patient care must be considered as essential aspects of treatment.", "title": "" }, { "docid": "6280266740e1a3da3fd536c134b39cfd", "text": "Despite years of research yielding systems and guidelines to aid visualization design, practitioners still face the challenge of identifying the best visualization for a given dataset and task. One promising approach to circumvent this problem is to leverage perceptual laws to quantitatively evaluate the effectiveness of a visualization design. Following previously established methodologies, we conduct a large scale (n = 1687) crowdsourced experiment to investigate whether the perception of correlation in nine commonly used visualizations can be modeled using Weber's law. 
The results of this experiment contribute to our understanding of information visualization by establishing that: (1) for all tested visualizations, the precision of correlation judgment could be modeled by Weber's law, (2) correlation judgment precision showed striking variation between negatively and positively correlated data, and (3) Weber models provide a concise means to quantify, compare, and rank the perceptual precision afforded by a visualization.", "title": "" }, { "docid": "7706e41b7c79a8c290d0f4f580fea534", "text": "For various reasons, the cloud computing paradigm is unable to meet certain requirements (e.g. low latency and jitter, context awareness, mobility support) that are crucial for several applications (e.g. vehicular networks, augmented reality). To fulfil these requirements, various paradigms, such as fog computing, mobile edge computing, and mobile cloud computing, have emerged in recent years. While these edge paradigms share several features, most of the existing research is compartmentalised; no synergies have been explored. This is especially true in the field of security, where most analyses focus only on one edge paradigm, while ignoring the others. The main goal of this study is to holistically analyse the security threats, challenges, and mechanisms inherent in all edge paradigms, while highlighting potential synergies and venues of collaboration. In our results, we will show that all edge paradigms should consider the advances in other paradigms.", "title": "" }, { "docid": "9f82843a9f3434ada3f60d09604b0afe", "text": "The performance of pattern classifiers depends on the separability of the classes in the feature space - a property related to the quality of the descriptors - and the choice of informative training samples for user labeling - a procedure that usually requires active learning. This work is devoted to improve the quality of the descriptors when samples are superpixels from remote sensing images. We introduce a new scheme for superpixel description based on Bag of visual Words, which includes information from adjacent superpixels, and validate it by using two remote sensing images and several region descriptors as baselines.", "title": "" }, { "docid": "57f4a6ad2e54f84d305ad8ae117bb950", "text": "Using an event-related functional MRI design, we explored the relative roles of dorsal and ventral prefrontal cortex (PFC) regions during specific components (Encoding, Delay, Response) of a working memory task under different memory-load conditions. In a group analysis, effects of increased memory load were observed only in dorsal PFC in the encoding period. Activity was lateralized to the right hemisphere in the high but not the low memory-load condition. Individual analyses revealed variability in activation patterns across subjects. Regression analyses indicated that one source of variability was subjects' memory retrieval rate. It was observed that dorsal PFC plays a differentially greater role in information retrieval for slower subjects, possibly because of inefficient retrieval processes or a reduced quality of mnemonic representations. This study supports the idea that dorsal and ventral PFC play different roles in component processes of working memory.", "title": "" }, { "docid": "a3d10348d5f6e51fefb3f642098be32e", "text": "We propose a Convolutional Neural Network (CNN) based algorithm – StuffNet – for object detection. 
In addition to the standard convolutional features trained for region proposal and object detection [33], StuffNet uses convolutional features trained for segmentation of objects and 'stuff' (amorphous categories such as ground and water). Through experiments on Pascal VOC 2010, we show the importance of features learnt from stuff segmentation for improving object detection performance. StuffNet improves performance from 18.8% mAP to 23.9% mAP for small objects. We also devise a method to train StuffNet on datasets that do not have stuff segmentation labels. Through experiments on Pascal VOC 2007 and 2012, we demonstrate the effectiveness of this method and show that StuffNet also significantly improves object detection performance on such datasets.", "title": "" }, { "docid": "09a8aee1ff3315562c73e5176a870c37", "text": "In a sparse-representation-based face recognition scheme, the desired dictionary should have good representational power (i.e., being able to span the subspace of all faces) while supporting optimal discrimination of the classes (i.e., different human subjects). We propose a method to learn an over-complete dictionary that attempts to simultaneously achieve the above two goals. The proposed method, discriminative K-SVD (D-KSVD), is based on extending the K-SVD algorithm by incorporating the classification error into the objective function, thus allowing the performance of a linear classifier and the representational power of the dictionary being considered at the same time by the same optimization procedure. The D-KSVD algorithm finds the dictionary and solves for the classifier using a procedure derived from the K-SVD algorithm, which has proven efficiency and performance. This is in contrast to most existing work that relies on iteratively solving sub-problems with the hope of achieving the global optimal through iterative approximation. We evaluate the proposed method using two commonly-used face databases, the Extended YaleB database and the AR database, with detailed comparison to 3 alternative approaches, including the leading state-of-the-art in the literature. The experiments show that the proposed method outperforms these competing methods in most of the cases. Further, using Fisher criterion and dictionary incoherence, we also show that the learned dictionary and the corresponding classifier are indeed better-posed to support sparse-representation-based recognition.", "title": "" }, { "docid": "9a5001d80a15b505e5a1aa69f6a584df", "text": "We survey the optical character recognition (OCR) literature with reference to the Urdu-like cursive scripts. In particular, the Urdu, Pushto, and Sindhi languages are discussed, with the emphasis being on the Nasta'liq and Naskh scripts. Before detaining the OCR works, the peculiarities of the Urdu-like scripts are outlined, which are followed by the presentation of the available text image databases. For the sake of clarity, the various attempts are grouped into three parts, namely: (a) printed, (b) handwritten, and (c) online character recognition. Within each part, the works are analyzed par rapport a typical OCR pipeline with an emphasis on the preprocessing, segmentation, feature extraction, classification, and", "title": "" }, { "docid": "9c61e4971829a799b6e979f1b6d69387", "text": "This work examines humanoid social robots in Japan and the North America with a view to comparing and contrasting the projects cross culturally. 
In North America, I look at the work of Cynthia Breazeal at the Massachusetts Institute of Technology and her sociable robot project: Kismet. In Japan, at the Osaka University, I consider the project of Hiroshi Ishiguro: Repliée-Q2. I first distinguish between utilitarian and affective social robots. Then, drawing on published works of Breazeal and Ishiguro I examine the proposed vision of each project. Next, I examine specific characteristics (embodied and social intelligence, morphology and aesthetics, and moral equivalence) of Kismet and Repliée with a view to comparing the underlying concepts associated with each. These features are in turn connected to the societal preconditions of robots generally. Specifically, the role that history of robots, theology/spirituality, and popular culture plays in the reception and attitude toward robots is considered.", "title": "" }, { "docid": "560cadfecdf5207851d333b4a122a06d", "text": "Over the past years, state-of-the-art information extraction (IE) systems such as NELL [5] and ReVerb [9] have achieved impressive results by producing very large knowledge resources at web scale with minimal supervision. However, these resources lack the schema information, exhibit a high degree of ambiguity, and are difficult even for humans to interpret. Working with such resources becomes easier if there is a structured information base to which the resources can be linked. In this paper, we introduce the integration of open information extraction projects with Wikipedia-based IE projects that maintain a logical schema, as an important challenge for the NLP, semantic web, and machine learning communities. We describe the problem, present a gold-standard benchmark, and take the first steps towards a data-driven solution to the problem. This is especially promising, since NELL and ReVerb typically achieve a very large coverage, but still still lack a fullfledged clean ontological structure which, on the other hand, could be provided by large-scale ontologies like DBpedia [2] or YAGO [13].", "title": "" }, { "docid": "f11ee9f354936eefa539d9aa518ac6b1", "text": "This paper presents a modified priority based probe algorithm for deadlock detection and resolution in distributed database systems. The original priority based probe algorithm was presented by Sinha and Natarajan based on work by Chandy, Misra, and Haas. Various examples are used to show that the original priority based algorithm either fails to detect deadlocks or reports deadlocks which do not exist in many situations. A modified algorithm which eliminates these problems is proposed. This algorithm has been tested through simulation and appears to be error free. Finally, the performance of the modified algorithm is briefly discussed.", "title": "" }, { "docid": "98efa74b25284d0ce22038811f9e09e5", "text": "Automatic analysis of malicious binaries is necessary in order to scale with the rapid development and recovery of malware found in the wild. The results of automatic analysis are useful for creating defense systems and understanding the current capabilities of attackers. We propose an approach for automatic dissection of malicious binaries which can answer fundamental questions such as what behavior they exhibit, what are the relationships between their inputs and outputs, and how an attacker may be using the binary. We implement our approach in a system called BitScope. At the core of BitScope is a system which allows us to execute binaries with symbolic inputs. 
Executing with symbolic inputs allows us to reason about code paths without constraining the analysis to a particular input value. We implement 5 analysis using BitScope, and demonstrate that the analysis can rapidly analyze important properties such as what behaviors the malicious binaries exhibit. For example, BitScope uncovers all commands in typical DDoS zombies and botnet programs, and uncovers significant behavior in just minutes. This work was supported in part by CyLab at Carnegie Mellon under grant DAAD19-02-1-0389 from the Army Research Office, the U.S. Army Research Office under the Cyber-TA Research Grant No. W911NF-06-1-0316, the ITA (International Technology Alliance), CCF-0424422, National Science Foundation Grant Nos. 0311808, 0433540, 0448452, 0627511, and by the IT R&D program of MIC(Ministry of Information and Communication)/IITA(Institute for Information Technology Advancement) [2005-S-606-02, Next Generation Prediction and Response technology for Computer and Network Security Incidents]. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, the ARO, CMU, or the U.S. Government.", "title": "" }, { "docid": "19f88b3cec591c753f57ec3a14ee3e34", "text": "The ability of predicting the future is important for intelligent systems, e.g. autonomous vehicles and robots to plan early and make decisions accordingly. Future scene parsing and optical flow estimation are two key tasks that help agents better understand their environments as the former provides dense semantic information, i.e. what objects will be present and where they will appear, while the latter provides dense motion information, i.e. how the objects will move. In this paper, we propose a novel model to simultaneously predict scene parsing and optical flow in unobserved future video frames. To our best knowledge, this is the first attempt in jointly predicting scene parsing and motion dynamics. In particular, scene parsing enables structured motion prediction by decomposing optical flow into different groups while optical flow estimation brings reliable pixel-wise correspondence to scene parsing. By exploiting this mutually beneficial relationship, our model shows significantly better parsing and motion prediction results when compared to well-established baselines and individual prediction models on the large-scale Cityscapes dataset. In addition, we also demonstrate that our model can be used to predict the steering angle of the vehicles, which further verifies the ability of our model to learn latent representations of scene dynamics.", "title": "" }, { "docid": "473eb35bb5d3a85a4e9f5867aaf3c363", "text": "This paper develops techniques using which humans can be visually recognized. While face recognition would be one approach to this problem, we believe that it may not be always possible to see a person?s face. Our technique is complementary to face recognition, and exploits the intuition that human motion patterns and clothing colors can together encode several bits of information. Treating this information as a \"temporary fingerprint\", it may be feasible to recognize an individual with reasonable consistency, while allowing her to turn off the fingerprint at will.\n One application of visual fingerprints relates to augmented reality, in which an individual looks at other people through her camera-enabled glass (e.g., Google Glass) and views information about them. 
Another application is in privacy-preserving pictures – Alice should be able to broadcast her \"temporary fingerprint\" to all cameras in the vicinity along with a privacy preference, saying \"remove me\". If a stranger's video happens to include Alice, the device can recognize her fingerprint in the video and erase her completely. This paper develops the core visual fingerprinting engine – InSight – on the platform of Android smartphones and a backend server running MATLAB and OpenCV. Results from real world experiments show that 12 individuals can be discriminated with 90% accuracy using 6 seconds of video/motion observations. Video based emulation confirms scalability up to 40 users.", "title": "" }, { "docid": "00daf995562570c89901ca73e23dd29d", "text": "As advances in technology make payloads and instruments for space missions smaller, lighter, and more power efficient, a niche market is emerging from the university community to perform rapidly developed, low-cost missions on very small spacecraft - micro, nano, and picosatellites. Among this class of spacecraft are CubeSats, with a basic form of 10 × 10 × 10 cm, weighing a maximum of 1kg. In order to serve as a viable alternative to larger spacecraft, small satellite platforms must provide the end user with access to space and similar functionality to mainstream missions. However, despite recent advances, small satellites have not been able to reach their full potential. Without launch vehicles dedicated to launching small satellites as primary payloads, launch opportunities only exist in the form of co-manifest or secondary payload missions, with launches often subsidized by the government. In addition, power, size, and mass constraints create additional hurdles for small satellites. To date, the primary method of increasing a small satellite's capability has been focused on miniaturization of technology. The CubeSat Program embraces this approach, but has also focused on developing an infrastructure to offset unavoidable limitations caused by the constraints of small satellite missions. The main components of this infrastructure are: an extensive developer community, standards for spacecraft and launch vehicle interfaces, and a network of ground stations. This paper will focus on the CubeSat Program, its history, and the philosophy behind the various elements that make it a practical and enabling alternative for access to space.", "title": "" }, { "docid": "e4da3b7fbbce2345d7772b0674a318d5", "text": "5", "title": "" }, { "docid": "f7edc938429e5f085e355004325b7698", "text": "We present a large scale unified natural language inference (NLI) dataset for providing insight into how well sentence representations capture distinct types of reasoning. We generate a large-scale NLI dataset by recasting 11 existing datasets from 7 different semantic tasks. We use our dataset of approximately half a million context-hypothesis pairs to test how well sentence encoders capture distinct semantic phenomena that are necessary for general language understanding. Some phenomena that we consider are event factuality, named entity recognition, figurative language, gendered anaphora resolution, and sentiment analysis, extending prior work that included semantic roles and frame semantic parsing. Our dataset will be available at https://www.decomp.net, to grow over time as additional resources are recast.", "title": "" } ]
scidocsrr
62b7a5e34687cc47aea2666ad0809535
Online Rank Elicitation for Plackett-Luce: A Dueling Bandits Approach
[ { "docid": "3797ca0ca77e51b2e77a1f46665edeb8", "text": "This paper proposes a new method for the Karmed dueling bandit problem, a variation on the regular K-armed bandit problem that offers only relative feedback about pairs of arms. Our approach extends the Upper Confidence Bound algorithm to the relative setting by using estimates of the pairwise probabilities to select a promising arm and applying Upper Confidence Bound with the winner as a benchmark. We prove a sharp finite-time regret bound of order O(K log T ) on a very general class of dueling bandit problems that matches a lower bound proven in (Yue et al., 2012). In addition, our empirical results using real data from an information retrieval application show that it greatly outperforms the state of the art.", "title": "" } ]
[ { "docid": "08606c417ec49d44c4d2715ae96c0c43", "text": "Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it brings on the average user, these gatekeepers recently started to introduce personalization features, algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the algorithms, but they also can manually influence the filtering process even when the algorithm is operational. We further analyze filtering processes in detail, show how personalization connects to other filtering techniques, and show that both human and technical biases are present in today’s emergent gatekeepers. We use the existing literature on gatekeeping and search engine bias and provide a model of algorithmic gatekeeping.", "title": "" }, { "docid": "9a197ed0f86f123a3f003e5fa70acdcf", "text": "CONTEXT\nOveruse injuries of the musculoskeletal system in immature athletes are commonly seen in medical practice.\n\n\nEVIDENCE ACQUISITION\nAn analysis of published clinical, outcome, and biomechanical studies of adolescent epiphyseal and overuse injuries was performed through 2008 to increase recognition and provide treatment recommendations.\n\n\nRESULTS\nAdolescent athletes can sustain physeal and bony stress injuries. Recovery and return to play occur more swiftly if such injuries are diagnosed early and immobilized until the patient is pain-free, typically about 4 weeks for apophyseal and epiphyseal overuse injuries. Certain epiphyseal injuries have prolonged symptoms with delayed treatment, including those involving the bones in the hand, elbow, and foot. If such injuries are missed, prolonged healing and significant restrictions in athletic pursuits may occur.\n\n\nCONCLUSION\nSome of these injuries are common to all weightbearing sports and are therefore widely recognized. Several are common in gymnastics but are rarely seen in other athletes. Early recognition and treatment of these conditions lead to quicker recovery and so may prevent season-ending, even career-ending, events from occurring.", "title": "" }, { "docid": "85678fca24cfa94efcc36570b3f1ef62", "text": "Content-based recommender systems use preference ratings and features that characterize media to model users' interests or information needs for making future recommendations. While previously developed in the music and text domains, we present an initial exploration of content-based recommendation for spoken documents using a corpus of public domain internet audio. Unlike familiar speech technologies of topic identification and spoken document retrieval, our recommendation task requires a more comprehensive notion of document relevance than bags-of-words would supply. Inspired by music recommender systems, we automatically extract a wide variety of content-based features to characterize non-linguistic aspects of the audio such as speaker, language, gender, and environment. To combine these heterogeneous information sources into a single relevance judgement, we evaluate feature, score, and hybrid fusion techniques. 
Our study provides an essential first exploration of the task and clearly demonstrates the value of a multisource approach over a bag-of-words baseline.", "title": "" }, { "docid": "50227fb4d7f97807baf69230cb2c5c52", "text": "Automatic text summarization aims at producing summary from a document or a set of documents. It has become a widely explored area of research as the need for immediate access to relevant and precise information that can effectively represent huge amount of information. Because relevant information is scattered across a given document, every user is faced with the problem of going through a large amount of information to get to the main gist of a text. This calls for the need to be able to view a smaller portion of large documents without necessarily losing the important aspect of the information contained therein. This paper provides an overview of current technologies, techniques and challenges in automatic text summarization. Consequently, we discuss our efforts at providing an efficient model for compact and concise documents summarization using sentence scoring algorithm and a sentence reduction algorithm. Based on comparison with the wellknown Copernic summarizer and the FreeSummarizer, our system showed that the summarized sentences contain more relevant information such that selected sentences are relevant to the query posed by the user CS Concepts • Computing methodologies ➝Artificial intelligence ➝Natural language processing ➝Information extraction • Information Systems ➝Information Retrieval ➝Retrieval tasks and goals ➝Summarization", "title": "" }, { "docid": "d88ce9c09fdfa0c1ea023ce08183f39b", "text": "The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation.\n This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing if and how data fusion is performed in each.", "title": "" }, { "docid": "e63a871465311cf896308ca7c46e9391", "text": "Physics, biology, and medicine have well-refined public explanations of their research processes. Even in simplified form, these provide guidance about what counts as “good research” both inside and outside the field. Software engineering has not yet explicitly identified and explained either our research processes or the ways we recognize excellent work. Science and engineering research fields can be characterized in terms of the kinds of questions they find worth investigating, the research methods they adopt, and the criteria by which they evaluate their results. I will present such a characterization for software engineering, showing the diversity of research strategies and the way they shift as ideas mature. 
Understanding these strategies should help software engineers design research plans and report the results clearly; it should also help explain the character of software engineering research to computer science at large and to other scientists.", "title": "" }, { "docid": "fbcab4ec5e941858efe7e72db910de67", "text": "Previously published guidelines provide comprehensive recommendations for hand hygiene in healthcare facilities. The intent of this document is to highlight practical recommendations in a concise format, update recommendations with the most current scientific evidence, and elucidate topics that warrant clarification or more robust research. Additionally, this document is designed to assist healthcare facilities in implementing hand hygiene adherence improvement programs, including efforts to optimize hand hygiene product use, monitor and report back hand hygiene adherence data, and promote behavior change. This expert guidance document is sponsored by the Society for Healthcare Epidemiology of America (SHEA) and is the product of a collaborative effort led by SHEA, the Infectious Diseases Society of America (IDSA), the American Hospital Association (AHA), the Association for Professionals in Infection Control and Epidemiology (APIC), and The Joint Commission, with major contributions from representatives of a number of organizations and societies with content expertise. The list of endorsing and supporting organizations is presented in the introduction to the 2014 updates.", "title": "" }, { "docid": "8db962a51ab6c9dc6002cb9b9aba35ca", "text": "For some time now machine learning methods have been widely used in perception for autonomous robots. While there have been many results describing the performance of machine learning techniques with regards to their accuracy or convergence rates, relatively little work has been done on developing theoretical performance guarantees about their stability and robustness. As a result, many machine learning techniques are still limited to being used in situations where safety and robustness are not critical for success. One way to overcome this difficulty is by using reachability analysis, which can be used to compute regions of the state space, known as reachable sets, from which the system can be guaranteed to remain safe over some time horizon regardless of the disturbances. In this paper we show how reachability analysis can be combined with machine learning in a scenario in which an aerial robot is attempting to learn the dynamics of a ground vehicle using a camera with a limited field of view. The resulting simulation data shows that by combining these two paradigms, one can create robotic systems that feature the best qualities of each, namely high performance and guaranteed safety.", "title": "" }, { "docid": "1eba4ab4cb228a476987a5d1b32dda6c", "text": "Optimistic estimates suggest that only 30-70% of waste generated in cities of developing countries is collected for disposal. As a result, uncollected waste is often disposed of into open dumps, along the streets or into water bodies. Quite often, this practice induces environmental degradation and public health risks. Notwithstanding, such practices also make waste materials readily available for itinerant waste pickers. These 'scavengers' as they are called, therefore perceive waste as a resource, for income generation. Literature suggests that Informal Sector Recycling (ISR) activity can bring other benefits such as, economic growth, litter control and resources conservation. 
This paper critically reviews trends in ISR activities in selected developing and transition countries. ISR often survives in very hostile social and physical environments largely because of negative Government and public attitude. Rather than being stigmatised, the sector should be recognised as an important element for achievement of sustainable waste management in developing countries. One solution to this problem could be the integration of ISR into the formal waste management system. To achieve ISR integration, this paper highlights six crucial aspects from literature: social acceptance, political will, mobilisation of cooperatives, partnerships with private enterprises, management and technical skills, as well as legal protection measures. It is important to note that not every country will have the wherewithal to achieve social inclusion and so the level of integration must be 'flexible'. In addition, the structure of the ISR should not be based on a 'universal' model but should instead take into account local contexts and conditions.", "title": "" }, { "docid": "0f00e029fa2ae5223dca2049680b4d16", "text": "Many classifications of attacks have been tendered, often in taxonomic form, A common basis of these taxonomies is that they have been framed from the perspective of an attacker - they organize attacks with respect to the attacker's goals, such as privilege elevation from user to root (from the well known Lincoln taxonomy). Taxonomies based on attacker goals are attack-centric; those based on defender goals are defense-centric. Defenders need a way of determining whether or not their detectors will detect a given attack. It is suggested that a defense-centric taxonomy would suit this role more effectively than an attack-centric taxonomy. This paper presents a new, defense-centric attack taxonomy, based on the way that attacks manifest as anomalies in monitored sensor data. Unique manifestations, drawn from 25 attacks, were used to organize the taxonomy, which was validated through exposure to an intrusion-detection system, confirming attack detect ability. The taxonomy's predictive utility was compared against that of a well-known extant attack-centric taxonomy. The defense-centric taxonomy is shown to be a more effective predictor of a detector's ability to detect specific attacks, hence informing a defender that a given detector is competent against an entire class of attacks.", "title": "" }, { "docid": "7e264804d56cab24454c59fe73b51884", "text": "General Douglas MacArthur remarked that \"old soldiers never die; they just fade away.\" For decades, researchers have concluded that visual working memories, like old soldiers, fade away gradually, becoming progressively less precise as they are retained for longer periods of time. However, these conclusions were based on threshold-estimation procedures in which the complete termination of a memory could artifactually produce the appearance of lower precision. Here, we use a recall-based visual working memory paradigm that provides separate measures of the probability that a memory is available and the precision of the memory when it is available. 
Using this paradigm, we demonstrate that visual working memory representations may be retained for several seconds with little or no loss of precision, but that they may terminate suddenly and completely during this period.", "title": "" }, { "docid": "9e7998cce943fa2b60f4d4773bc51e40", "text": "This paper presents a novel technique to correct for bias in a classical estimator using a learning approach. We apply a learned bias correction to a lidar-only motion estimation pipeline. Our technique trains a Gaussian process (GP) regression model using data with ground truth. The inputs to the model are high-level features derived from the geometry of the point-clouds, and the outputs are the predicted biases between poses computed by the estimator and the ground truth. The predicted biases are applied as a correction to the poses computed by the estimator. Our technique is evaluated on over 50km of lidar data, which includes the KITTI odometry benchmark and lidar datasets collected around the University of Toronto campus. After applying the learned bias correction, we obtained significant improvements to lidar odometry in all datasets tested. We achieved around 10% reduction in errors on all datasets from an already accurate lidar odometry algorithm, at the expense of only less than 1% increase in computational cost at run-time.", "title": "" }, { "docid": "fc27f16c0d61ece9cc07b5b1b2a4d7ee", "text": "Internet of things is a system consists of actuators or sensors or both that provides connectivity to the internet directly or indirectly. Internet of Things (IoT) advances can be used in smart farming to enhance quality of agriculture. Agriculture, the backbone of Indian economy, contributes to the overall economic growth of the country. But our productivity is very less as compared to world standards due to the use of obsolete farming technology, and nowadays people from rural areas migrate to an urban area for other profitable businesses, and they can't focus on agriculture. Innovation in farming is not new but IoT is set to push smart farming to next level Internet of things is a system consists of actuators or sensors or both provides connectivity to the internet directly or indirectly. This paper includes various features like detection of leaf disease, server based remote monitoring system, Humidity and temperature sensing, Soil the Moisture Sensing etc. It makes use of sensors networks for measurement of moisture, temperature, and humidity instead of manual check. Various Sensors are deployed in various locations of farms, to control all these sensors it has been used one controller called Raspberry PI (RPI). Leaf disease can be detected camera interfacing with RPI. Immediate status of a farm like a leaf disease and other environmental factors affecting crop like humidity, temperature and moisture is send using WIFI Server through RPI to the farmers. The paper presents the study of IOT techniques to engross the use of technology in Agriculture.", "title": "" }, { "docid": "1cba225a1f9de1576a5fdfb16c101bff", "text": "Electromagnetic trackers have many favorable characteristics but are notorious for their sensitivity to magnetic field distortions resulting from metal and electronic equipment in the environment. We categorize existing tracker calibration methods and present an improved technique for reducing the static position and orientation errors that are inherent to these devices. 
A quaternion-based formulation provides a simple and fast computational framework for representing orientation errors. Our experimental apparatus consists of a 6-DOF mobile platform and an optical position measurement system, allowing the collection of full-pose data at nearly arbitrary orientations of the receiver. A polynomial correction technique is applied and evaluated using a Polhemus Fastrak resulting in a substantial improvement of tracking accuracy. Finally, we apply advanced visualization algorithms to give new insight into the nature of the magnetic distortion field.", "title": "" }, { "docid": "f53a2ca0fda368d0e90cbb38076658af", "text": "RNAi therapeutics is a powerful tool for treating diseases by sequence-specific targeting of genes using siRNA. Since its discovery, the need for a safe and efficient delivery system for siRNA has increased. Here, we have developed and characterized a delivery platform for siRNA based on the natural polysaccharide starch in an attempt to address unresolved delivery challenges of RNAi. Modified potato starch (Q-starch) was successfully obtained by substitution with quaternary reagent, providing Q-starch with cationic properties. The results indicate that Q-starch was able to bind siRNA by self-assembly formation of complexes. For efficient and potent gene silencing we monitored the physical characteristics of the formed nanoparticles at increasing N/P molar ratios. The minimum ratio for complete entrapment of siRNA was 2. The resulting complexes, which were characterized by a small diameter (~30 nm) and positive surface charge, were able to protect siRNA from enzymatic degradation. Q-starch/siRNA complexes efficiently induced P-glycoprotein (P-gp) gene silencing in the human ovarian adenocarcinoma cell line, NCI-ADR/Res (NAR), over expressing the targeted gene and presenting low toxicity. Additionally, Q-starch-based complexes showed high cellular uptake during a 24-hour study, which also suggested that intracellular siRNA delivery barriers governed the kinetics of siRNA transfection. In this study, we have devised a promising siRNA delivery vector based on a starch derivative for efficient and safe RNAi application.", "title": "" }, { "docid": "ee997fc4bf329ef2918d5dbe021b3be3", "text": "This study examines the potential link of Facebook group participation with viral advertising responses. The results suggest that college-aged Facebook group members engage in higher levels of self-disclosure and maintain more favorable attitudes toward social media and advertising in general than do nongroup members. However, Facebook group participation does not exert an influence on users' viral advertising pass-on behaviors. The results also identify variations in predictors of passon behaviors between group members and nonmembers. These findings have theoretical and managerial implications for viral advertising on Facebook.", "title": "" }, { "docid": "75177326b8408f755100bf86e1f8bd90", "text": "We propose a general method for constructing Tanner graphs having a large girth by establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) algorithm. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. Simple variations of the PEG algorithm can also be applied to generate linear-time encodeable LDPC codes. 
Regular and irregular LDPC codes using PEG Tanner graphs and allowing symbol nodes to take values over GF(q) (q>2) are investigated. Simulation results show that the PEG algorithm is a powerful algorithm to generate good short-block-length LDPC codes.", "title": "" }, { "docid": "66d5e414e54c657c026fe0e7537c94ee", "text": "A mode-reconfigurable Butterworth bandpass filter, which can be switched between operating as a single-mode-dual-band (SMDB) and a dual-mode-single-band (DMSB) filter is presented. The filter is realized using a substrate integrated waveguide in a square cuboid geometry. Switching is enabled by using empty vias for the SMDB and liquid metal filled vias for the DMSB. The first two modes of the SMDB resonate 3 GHz apart, whereas the first two modes of the DMSB are degenerate and resonate only at the higher frequency. This is due to mode shifting of the first frequency band to the second frequency band. Measurements confirm the liquid-metal reconfiguration between the two operating modes.", "title": "" }, { "docid": "2b7c7162dbebc58958ea6f43ee7faf7b", "text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. Building on self-determination theory, this study presents a model of intrinsic motivation and engagement as \" active ingredients \" in garden-based education. The model was used to create reliable and valid measures of key constructs, and to guide the empirical exploration of motivational processes in garden-based learning. Teacher-and student-reports of garden engagement, administered to 310 middle school students, demonstrated multidimensional structures, good measurement properties, convergent validity, and the expected correlations with self-perceptions in the garden, garden learning , achievement, and engagement in science and school. Exploratory path analyses, calculated using multiple regression, provided initial support for the self-determination model of motivation: students' perceived autonomy, competence, and intrinsic motivation uniquely predicted their engagement in the garden, which in turn, predicted learning in the gardens and achievement in school. School gardens are flourishing. In the United States, they number in the tens of thousands, with 4,000 in California alone (California Department of Education, 2010). Goals for school gardens are as unique as the schools themselves, but in general they target four student outcomes: a) science learning and school achievement; b) ecological and environmental awareness and responsible behaviors; such as recycling and composting; d) knowledge about food systems 1. 
The Learning-Gardens Educational Assessment Group (or LEAG) is an interdisciplinary group of faculty and students from the Department of Psychology and the Graduate School of Education at Portland State University and the leadership of Lane Middle School of Portland Public Schools organized around a garden-based education program, the 17 and nutrition, and healthy eating, especially consumption of fresh fruits and vegetables; and d) positive youth development (Ratcliffe, Goldberg, Rogers, & Merrigan, 2010). Evidence about the beneficial effects of garden programs comes from qualitative and quantitative research and case studies from multiple disciplines …", "title": "" }, { "docid": "08d8e372c5ae4eef9848552ee87fbd64", "text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteiidge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to CAT VISUAL CORTEX 107 the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical …", "title": "" } ]
scidocsrr
42906a4d189fbc46361e9f3028296e18
BotFinder: finding bots in network traffic without deep packet inspection
[ { "docid": "795f59c0658a56aa68a9271d591c81a6", "text": "We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external entities, developing an evidence trail of data exchanges that match a state-based infection sequence model. BotHunter consists of a correlation engine that is driven by three malware-focused network packet sensors, each charged with detecting specific stages of the malware infection process, including inbound scanning, exploit usage, egg downloading, outbound bot coordination dialog, and outbound attack propagation. The BotHunter correlator then ties together the dialog trail of inbound intrusion alarms with those outbound communication patterns that are highly indicative of successful local host infection. When a sequence of evidence is found to match BotHunter’s infection dialog model, a consolidated report is produced to capture all the relevant events and event sources that played a role during the infection process. We refer to this analytical strategy of matching the dialog flows between internal assets and the broader Internet as dialog-based correlation, and contrast this strategy to other intrusion detection and alert correlation methods. We present our experimental results using BotHunter in both virtual and live testing environments, and discuss our Internet release of the BotHunter prototype. BotHunter is made available both for operational use and to help stimulate research in understanding the life cycle of malware infections.", "title": "" } ]
[ { "docid": "fcef7ce729a08a5b8c6ed1d0f2d53633", "text": "Community question-answering (CQA) systems, such as Yahoo! Answers or Stack Overflow, belong to a prominent group of successful and popular Web 2.0 applications, which are used every day by millions of users to find an answer on complex, subjective, or context-dependent questions. In order to obtain answers effectively, CQA systems should optimally harness collective intelligence of the whole online community, which will be impossible without appropriate collaboration support provided by information technologies. Therefore, CQA became an interesting and promising subject of research in computer science and now we can gather the results of 10 years of research. Nevertheless, in spite of the increasing number of publications emerging each year, so far the research on CQA systems has missed a comprehensive state-of-the-art survey. We attempt to fill this gap by a review of 265 articles published between 2005 and 2014, which were selected from major conferences and journals. According to this evaluation, at first we propose a framework that defines descriptive attributes of CQA approaches. Second, we introduce a classification of all approaches with respect to problems they are aimed to solve. The classification is consequently employed in a review of a significant number of representative approaches, which are described by means of attributes from the descriptive framework. As a part of the survey, we also depict the current trends as well as highlight the areas that require further attention from the research community.", "title": "" }, { "docid": "4d791fa53f7ed8660df26cd4dbe9063a", "text": "The Internet is a powerful political instrument, wh ich is increasingly employed by terrorists to forward their goals. The fiv most prominent contemporary terrorist uses of the Net are information provision , fi ancing, networking, recruitment, and information gathering. This article describes a nd explains each of these uses and follows up with examples. The final section of the paper describes the responses of government, law enforcement, intelligence agencies, and others to the terrorism-Internet nexus. There is a particular emphasis within the te xt on the UK experience, although examples from other jurisdictions are also employed . ___________________________________________________________________ “Terrorists use the Internet just like everybody el se” Richard Clarke (2004) 1 ___________________________________________________________________", "title": "" }, { "docid": "020799a5f143063b843aaf067f52cf29", "text": "In this paper we propose a novel entity annotator for texts which hinges on TagME's algorithmic technology, currently the best one available. The novelty is twofold: from the one hand, we have engineered the software in order to be modular and more efficient; from the other hand, we have improved the annotation pipeline by re-designing all of its three main modules: spotting, disambiguation and pruning. In particular, the re-design has involved the detailed inspection of the performance of these modules by developing new algorithms which have been in turn tested over all publicly available datasets (i.e. AIDA, IITB, MSN, AQUAINT, and the one of the ERD Challenge). This extensive experimentation allowed us to derive the best combination which achieved on the ERD development dataset an F1 score of 74.8%, which turned to be 67.2% F1 for the test dataset. 
This final result was due to an impressive precision equal to 87.6%, but very low recall 54.5%. With respect to classic TagME on the development dataset the improvement ranged from 1% to 9% on the D2W benchmark, depending on the disambiguation algorithm being used. As a side result, the final software can be interpreted as a flexible library of several parsing/disambiguation and pruning modules that can be used to build up new and more sophisticated entity annotators. We plan to release our library to the public as an open-source project.", "title": "" }, { "docid": "35dfea171049ef5efbca6f3297ef27d2", "text": "BACKGROUND CONTEXT\nSegmental instrumentation systems have replaced nonsegmental systems in all areas of spine surgery. Construct patterns for fracture stabilization have been adapted from deformity experience and from biomechanical studies using nonsegmental systems. Few studies have been completed to validate the use of these implants in trauma or to assess their relative strengths and weaknesses.\n\n\nPURPOSE\nTo substantiate the safety and efficacy of segmental spinal instrumentation used to treat patients with unstable spinal fractures and to identify successful construct strategies and potential pitfalls.\n\n\nSTUDY DESIGN\nA prospective, longitudinal single cohort study of patients treated with segmental instrumentation for fractures of the spine. Minimum 2-year follow-up.\n\n\nPATIENT SAMPLE\nSeventy-five consecutive patients with unstable fractures of the thoracic, thoracolumbar and lumbar vertebrae, admitted to a level 1 trauma center. All patients sustained high-energy injuries: fifty-five (79%) were injured in motor vehicle accidents, 27 (38%) sustained two or more major additional injuries and 39 (56%) had neurological injuries.\n\n\nOUTCOME MEASURES\nPerioperative morbidity and mortality, blood loss, surgical time; postoperative recovery, neurological recovery, complications, thromboembolic and pulmonary disease; long-term outcome measures of fusion, sagittal spinal alignment, construct survival, patient pain and function measures, and return to work and activity.\n\n\nMETHODS\nA longitudinal, prospective study of surgical outcome after segmental spinal instrumentation. Multifactorial assessment was carried out at prescribed intervals to a mean follow-up of 5 years (range, 2 to 8 years) from the time of surgery. Seventy patients were included in the final analysis. There were 17 thoracic, 36 thoracolumbar and 17 lumbar fractures.\n\n\nRESULTS\nAt 52 months mean follow-up, 57 of 62 patients (92%) had solid fusion with acceptable spinal alignment. Perioperative complications and mortality were less than expected, based on historical controls matched for injury severity. Rod and hook constructs had 97% good to excellent functional results, with no hardware complications. Six of 11 (55%) patients with short-segment pedicle instrumentation (SSPI) with no anterior column reconstruction had greater than 10 degrees of sagittal collapse during the fracture healing period. Twenty six of 36 neurologically injured patients (72%) experienced (mean) 1.5 Frankel grades recovery after decompression and stabilization. Residual neurological deficit determined return to work: 43 patients (70%) returned to work, 33 without restrictions, 10 with limitations. Five other patients (8%) were fit but unemployed. Fifteen percent experienced some form of hardware failure, but only three (5%) required revision. 
Hardware complications and fair to poor outcomes occurred after pedicle instrumentation without anterior reconstruction. Patients with anterior reconstruction had 100% construct survival, no sagittal deformity, and less pain.\n\n\nCONCLUSION\nSegmental instrumentation allowed immediate mobilization of these severely injured patients, eliminating thromboembolic and pulmonary complications, and reducing overall morbidity and mortality. Segmental instrumentation produced a high rate of fusion with no rod breakage or hook failure. Pedicle screw constructs had a high rate of screw complications associated with anterior column insufficiency, but revision was not always necessary. Eighty percent of these severely injured patients were capable of returning to full-time employment, and 70% did so.", "title": "" }, { "docid": "52a5f4c15c1992602b8fe21270582cc6", "text": "This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO’s computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On realworld sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.", "title": "" }, { "docid": "e464e7335a4bc1af76d57b158dfcf435", "text": "An elementary way of using language is to refer to objects. Often, these objects are physically present in the shared environment and reference is done via mention of perceivable properties of the objects. This is a type of language use that is modelled well neither by logical semantics nor by distributional semantics, the former focusing on inferential relations between expressed propositions, the latter on similarity relations between words or phrases. We present an account of word and phrase meaning that is perceptually grounded, trainable, compositional, and ‘dialogueplausible’ in that it computes meanings word-by-word. We show that the approach performs well (with an accuracy of 65% on a 1-out-of-32 reference resolution task) on direct descriptions and target/landmark descriptions, even when trained with less than 800 training examples and automatically transcribed utterances.", "title": "" }, { "docid": "a3ebadf449537b5df8de3c5ab96c74cb", "text": "Do conglomerate firms have the ability to allocate resources efficiently across business segments? We address this question by comparing the performance of firms that follow passive benchmark strategies in their capital allocation process to those that actively deviate from those benchmarks. Using three measures of capital allocation style to capture various aspects of activeness, we show that active firms have a lower average industry-adjusted profitability than passive firms. 
This result is robust to controlling for potential endogeneity using matching analysis and regression analysis with firm fixed effects. Moreover, active firms obtain lower valuation and lower excess stock returns in subsequent periods. Our findings suggest that, on average, conglomerate firms that actively allocate resources across their business segments do not do so efficiently and that the stock market does not fully incorporate information revealed in the internal capital allocation process. Guedj and Huang are from the McCombs School of Business, University of Texas at Austin. Guedj: guedj@mail.utexas.edu and (512) 471-5781. Huang: jennifer.huang@mccombs.utexas.edu and (512) 232-9375. Sulaeman is from the Cox School of Business, Southern Methodist University, sulaeman@smu.edu and (214) 768-8284. The authors thank Alexander Butler, Amar Gande, Mark Leary, Darius Miller, Maureen O’Hara, Owen Lamont, Gordon Phillips, Mike Roberts, Oleg Rytchkov, Gideon Saar, Zacharias Sautner, Clemens Sialm, Rex Thompson, Sheridan Titman, Yuhai Xuan, participants at the Financial Research Association meeting and seminars at Cornell University, Southern Methodist University, the University of Texas at Austin, and the University of Texas at Dallas for their helpful comments.", "title": "" }, { "docid": "7012a0cb7b0270b43d733cf96ab33bac", "text": "The evolution of Cloud computing makes the major changes in computing world as with the assistance of basic cloud computing service models like SaaS, PaaS, and IaaS an organization achieves their business goal with minimum effort as compared to traditional computing environment. On the other hand security of the data in the cloud database server is the key area of concern in the acceptance of cloud. It requires a very high degree of privacy and authentication. To protect the data in cloud database server cryptography is one of the important methods. Cryptography provides various symmetric and asymmetric algorithms to secure the data. This paper presents the symmetric cryptographic algorithm named as AES (Advanced Encryption Standard). It is based on several substitutions, permutation and transformation.", "title": "" }, { "docid": "da237e14a3a9f6552fc520812073ee6c", "text": "Shock filters are based in the idea to apply locally either a dilation or an erosion process, depending on whether the pixel belongs to the influence zone of a maximum or a minimum. They create a sharp shock between two influence zones and produce piecewise constant segmentations. In this paper we design specific shock filters for the enhancement of coherent flow-like structures. They are based on the idea to combine shock filtering with the robust orientation estimation by means of the structure tensor. Experiments with greyscale and colour images show that these novel filters may outperform previous shock filters as well as coherence-enhancing diffusion filters.", "title": "" }, { "docid": "474572cef9f1beb875d3ae012e06160f", "text": "Published attacks against smartphones have concentrated on software running on the application processor. With numerous countermeasures like ASLR, DEP and code signing being deployed by operating system vendors, practical exploitation of memory corruptions on this processor has become a time-consuming endeavor. At the same time, the cellular baseband stack of most smartphones runs on a separate processor and is significantly less hardened, if at all. In this paper we demonstrate the risk of remotely exploitable memory corruptions in cellular baseband stacks. 
We analyze two widely deployed baseband stacks and give exemplary cases of memory corruptions that can be leveraged to inject and execute arbitrary code on the baseband processor. The vulnerabilities can be triggered over the air interface using a rogue GSM base station, for instance using OpenBTS together with a USRP software defined radio.", "title": "" }, { "docid": "64b4527cbb3c14f842d0087681e1c517", "text": "The design of an affect recognition system for socially perceptive robots relies on representative data: human-robot interaction in naturalistic settings requires an affect recognition system to be trained and validated with contextualised affective expressions, that is, expressions that emerge in the same interaction scenario of the target application. In this paper we propose an initial computational model to automatically analyse human postures and body motion to detect engagement of children playing chess with an iCat robot that acts as a game companion. Our approach is based on vision-based automatic extraction of expressive postural features from videos capturing the behaviour of the children from a lateral view. An initial evaluation, conducted by training several recognition models with contextualised affective postural expressions, suggests that patterns of postural behaviour can be used to accurately predict the engagement of the children with the robot, thus making our approach suitable for integration into an affect recognition system for a game companion in a real world scenario.", "title": "" }, { "docid": "0ff9e3b699e5cb5c098cdcc7d7ed78b6", "text": "Malwares are becoming persistent by creating fulledged variants of the same or different family. Malwares belonging to same family share same characteristics in their functionality of spreading infections into the victim computer. These similar characteristics among malware families can be taken as a measure for creating a solution that can help in the detection of the malware belonging to particular family. In our approach we have taken the advantage of detecting these malware families by creating the database of these characteristics in the form of n-grams of API sequences. We use various similarity score methods and also extract multiple API sequences to analyze malware effectively.", "title": "" }, { "docid": "b4d4a94e29298db46627ac42007d0713", "text": "Bitextor is a free/open-source application for harvesting translation memories from multilingual websites. It downloads all the HTML files in a website, preprocesses them into a coherent format and, finally, applies a set of heuristics to select pairs of files which are candidates to contain the same text in two different languages (bitexts). From these parallel texts, translation memories are generated in TMX format using the library LibTagAligner, which uses the HTML tags and the length of text chunks to perform the alignment.", "title": "" }, { "docid": "7350c0433fe1330803403e6aa03a2f26", "text": "An introduction is provided to Multi-Entity Bayesian Networks (MEBN), a logic system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated sub-structures. Knowledge is encoded as a collection of Bayesian network fragments (MFrags) that can be instantiated and combined to form highly complex situation-specific Bayesian networks. 
A MEBN theory (MTheory) implicitly represents a joint probability distribution over possibly unbounded numbers of hypotheses, and uses Bayesian learning to refine a knowledge base as observations accrue. MEBN provides a logical foundation for the emerging collection of highly expressive probability-based languages. A running example illustrates the representation and reasoning power of the MEBN formalism.", "title": "" }, { "docid": "2dd8b7004f45ae374a72e2c7d40b0892", "text": "In this letter, a multifeed tightly coupled patch array antenna capable of broadband operation is analyzed and designed. First, an antenna array composed of infinite elements with each element excited by a feed is proposed. To produce specific polarized radiation efficiently, a new patch element is proposed, and its characteristics are studied based on a 2-port network model. Full-wave simulation results show that the infinite antenna array exhibits both a high efficiency and desirable radiation pattern in a wide frequency band (10 dB bandwidth) from 1.91 to 5.35 GHz (94.8%). Second, to validate its outstanding performance, a realistic finite 4 × 4 antenna prototype is designed, fabricated, and measured in our laboratory. The experimental results agree well with simulated ones, where the frequency bandwidth (VSWR < 2) is from 2.5 to 3.8 GHz (41.3%). The inherent compact size, light weight, broad bandwidth, and good radiation characteristics make this array antenna a promising candidate for future communication and advanced sensing systems.", "title": "" }, { "docid": "bfeff1e1ef24d0cb92d1844188f87cc8", "text": "While user attribute extraction on social media has received considerable attention, existing approaches, mostly supervised, encounter great difficulty in obtaining gold standard data and are therefore limited to predicting unary predicates (e.g., gender). In this paper, we present a weaklysupervised approach to user profile extraction from Twitter. Users’ profiles from social media websites such as Facebook or Google Plus are used as a distant source of supervision for extraction of their attributes from user-generated text. In addition to traditional linguistic features used in distant supervision for information extraction, our approach also takes into account network information, a unique opportunity offered by social media. We test our algorithm on three attribute domains: spouse, education and job; experimental results demonstrate our approach is able to make accurate predictions for users’ attributes based on their tweets.1", "title": "" }, { "docid": "6059ad37cced50133792086a5c95f050", "text": "The paper discusses and evaluates the effects of an information security awareness programme. The programme emphasised employee participation, dialogue and collective reflection in groups. The intervention consisted of small-sized workshops aimed at improving information security awareness and behaviour. An experimental research design consisting of one survey before and two after the intervention was used to evaluate whether the intended changes occurred. Statistical analyses revealed that the intervention was powerful enough to significantly change a broad range of awareness and behaviour indicators among the intervention participants. In the control group, awareness and behaviour remained by and large unchanged during the period of the study. 
Unlike the approach taken by the intervention studied in this paper, mainstream information security awareness measures are typically top-down, and seek to bring about changes at the individual level by means of an expert-based approach directed at a large population, e.g. through formal presentations, e-mail messages, leaflets and posters. This study demonstrates that local employee participation, collective reflection and group processes produce changes in short-term information security awareness and behaviour. a 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "55ec669a67b88ff0b6b88f1fa6408df9", "text": "This paper proposes low overhead training techniques for a wireless communication system equipped with a Multifunctional Reconfigurable Antenna (MRA) capable of dynamically changing beamwidth and beam directions. A novel microelectromechanical system (MEMS) MRA antenna is presented with radiation patterns (generated using complete electromagnetic full-wave analysis) which are used to quantify the communication link performance gains. In particular, it is shown that using the proposed Exhaustive Training at Reduced Frequency (ETRF) consistently results in a reduction in training overhead. It is also demonstrated that further reduction in training overhead is possible using statistical or MUSIC-based training schemes. Bit Error Rate (BER) and capacity simulations are carried out using an MRA, which can tilt its radiation beam into one of Ndir = 4 or 8 directions with variable beamwidth (≈2π/Ndir). The performance of each training scheme is quantified for OFDM systems operating in frequency selective channels with and without Line of Sight (LoS). We observe 6 dB of gain at BER = 10-4 and 6 dB improvement in capacity (at capacity = 6 bits/sec/subcarrier) are achievable for an MRA with Ndir= 8 as compared to omni directional antennas using ETRF scheme in a LoS environment.", "title": "" }, { "docid": "fa8cfa2eba1ba869d02a23efb122fe0c", "text": "With the advent of digital optical scanners, a lot of paper-based books, textbooks, magazines, articles, and documents are being transformed into an electronic version that can be manipulated by a computer. For this purpose, OCR, short for Optical Character Recognition was developed to translate scanned graphical text into editable computer text. Unfortunately, OCR is still imperfect as it occasionally mis-recognizes letters and falsely identifies scanned text, leading to misspellings and linguistics errors in the OCR output text. This paper proposes a post-processing context-based error correction algorithm for detecting and correcting OCR non-word and real-word errors. The proposed algorithm is based on Google‟s online spelling suggestion which harnesses an internal database containing a huge collection of terms and word sequences gathered from all over the web, convenient to suggest possible replacements for words that have been misspelled during the OCR process. Experiments carried out revealed a significant improvement in OCR error correction rate. Future research can improve upon the proposed algorithm so much so that it can be parallelized and executed over multiprocessing platforms.", "title": "" }, { "docid": "a9d92ae26a2e1023402c2c7c760b8074", "text": "We examined the psychometric properties of the Big Five personality traits assessed through social networking profiles in 2 studies consisting of 274 and 244 social networking website (SNW) users. 
First, SNW ratings demonstrated sufficient interrater reliability and internal consistency. Second, ratings via SNWs demonstrated convergent validity with self-ratings of the Big Five traits. Third, SNW ratings correlated with job performance, hirability, and academic performance criteria; and the magnitude of these correlations was generally larger than for self-ratings. Finally, SNW ratings accounted for significant variance in the criterion measures beyond self-ratings of personality and cognitive ability. We suggest that SNWs may provide useful information for potential use in organizational research and practice, taking into consideration various legal and ethical issues.", "title": "" } ]
scidocsrr
eac3bff8572d39167148a0453226b622
EANN: Event Adversarial Neural Networks for Multi-Modal Fake News Detection
[ { "docid": "f47ceeecad0b976369b08c81436855e9", "text": "We propose the first multistage intervention framework that tackles fake news in social networks by combining reinforcement learning with a point process network activity model. The spread of fake news and mitigation events within the network is modeled by a multivariate Hawkes process with additional exogenous control terms. By choosing a feature representation of states, defining mitigation actions and constructing reward functions to measure the effectiveness of mitigation activities, we map the problem of fake news mitigation into the reinforcement learning framework. We develop a policy iteration method unique to the multivariate networked point process, with the goal of optimizing the actions for maximal total reward under budget constraints. Our method shows promising performance in real-time intervention experiments on a Twitter network to mitigate a surrogate fake news campaign, and outperforms alternatives on synthetic datasets.", "title": "" } ]
[ { "docid": "d64b30b463245e7e3b1690a04f1748e2", "text": "Grasping-force optimization of multifingered robotic hands can be formulated as a problem for minimizing an objective function subject to form-closure constraints and balance constraints of external force. This paper presents a novel recurrent neural network for real-time dextrous hand-grasping force optimization. The proposed neural network is shown to be globally convergent to the optimal grasping force. Compared with existing approaches to grasping-force optimization, the proposed neural-network approach has the advantages that the complexity for implementation is reduced, and the solution accuracy is increased, by avoiding the linearization of quadratic friction constraints. Simulation results show that the proposed neural network can achieve optimal grasping force in real time.", "title": "" }, { "docid": "90b56b9168a02e20f42449ffa84f35b6", "text": "Malignant melanomas of the penis and urethra are rare.1–7 These lesions typically appear in an older age group.1–3,5–8 Presentation is usually late,1,6 with thick primary lesions9 and a high incidence of regional metastatic disease.1,5,8 Little is known about risk factors for or the pathogenesis of this disease. Furthermore, there is lack of consensus as to the extent of treatment that is indicated. The authors present a case of multifocal melanoma of the glans penis that has not been previously described. A review of the literature with emphasis on the pathogenesis and treatment of melanoma of the penis and urethra is discussed. Further efforts are needed to identify those at increased risk so that earlier diagnosis, surgical intervention, and an improved prognosis for those afflicted with this disease will be possible in the future.", "title": "" }, { "docid": "8dfc853c0d4256cdec04353982590e58", "text": "Search result diversification has gained momentum as a way to tackle ambiguous queries. An effective approach to this problem is to explicitly model the possible aspects underlying a query, in order to maximise the estimated relevance of the retrieved documents with respect to the different aspects. However, such aspects themselves may represent information needs with rather distinct intents (e.g., informational or navigational). Hence, a diverse ranking could benefit from applying intent-aware retrieval models when estimating the relevance of documents to different aspects. In this paper, we propose to diversify the results retrieved for a given query, by learning the appropriateness of different retrieval models for each of the aspects underlying this query. Thorough experiments within the evaluation framework provided by the diversity task of the TREC 2009 and 2010 Web tracks show that the proposed approach can significantly improve state-of-the-art diversification approaches.", "title": "" }, { "docid": "3a75bf4c982d076fce3b4cdcd560881a", "text": "This project is one of the research topics in Professor William Dally’s group. In this project, we developed a pruning based method to learn both weights and connections for Long Short Term Memory (LSTM). In this method, we discard the unimportant connections in a pretrained LSTM, and make the weight matrix sparse. Then, we retrain the remaining model. After we remaining model is converge, we prune this model again and retrain the remaining model iteratively, until we achieve the desired size of model and performance. This method will save the size of the LSTM as well as prevent overfitting. 
Our results retrained on NeuralTalk show that we can discard nearly 90% of the weights without hurting the performance too much. Part of the results in this project will be posted in NIPS 2015.", "title": "" }, { "docid": "8dfc99832483ec3b189ebd04f78e05ab", "text": "The IEEE 1394 “FireWire” interface provides a means for acquiring direct memory access. We discuss how this can be used to perform live memory forensics on a target system. We also present libforensic1394, an open-source software library designed especially for this purpose. Passive and active applications of live memory forensics are analysed. Memory imaging techniques are discussed at length. It is demonstrated how the interface can be used both to dump the memory of a live system and to compromise contemporary operating systems.", "title": "" }, { "docid": "112b9294f4d606a0112fe80742698184", "text": "Peer-to-peer systems are typically designed around the assumption that all peers will willingly contribute resources to a global pool. They thus suffer from freeloaders, that is, participants who consume many more resources than they contribute. In this paper, we propose a general economic framework for avoiding freeloaders in peer-to-peer systems. Our system works by keeping track of the resource consumption and resource contribution of each participant. The overall standing of each participant in the system is represented by a single scalar value, called their karma. A set of nodes, called a bank-set, keeps track of each node’s karma, increasing it as resources are contributed, and decreasing it as they are consumed. Our framework is resistant to malicious attempts by the resource provider, consumer, and a fraction of the members of the bank set. We illustrate the application of this framework to a peer-to-peer filesharing application.", "title": "" }, { "docid": "5aa0765f09be559b9e0edd89a47992a8", "text": "The study of membrane proteins requires a proper consideration of the specific environment provided by the biomembrane. The compositional complexity of this environment poses great challenges to all experimental and theoretical approaches. In this article a rather simple theoretical concept is discussed for its ability to mimic the biomembrane. The biomembrane is approximated by three mimicry solvents forming individual continuum layers of characteristic physical properties. Several specific structural problems are studied with a focus on the biological significance of such an approach. Our results support the general perception that the biomembrane is crucial for correct positioning and embedding of its constituents. The described model provides a semi-quantitative tool of potential interest to many problems in structural membrane biology.", "title": "" }, { "docid": "7e77adbdb66b24c0a2a4ba22993bd7f7", "text": "This paper provides an overview of research on social media and body image. Correlational studies consistently show that social media usage (particularly Facebook) is associated with body image concerns among young women and men, and longitudinal studies suggest that this association may strengthen over time. Furthermore, appearance comparisons play a role in the relationship between social media and body image. Experimental studies, however, suggest that brief exposure to one’s own Facebook account does not negatively impact young women’s appearance concerns. Further longitudinal and experimental research is needed to determine which aspects of social media are most detrimental to people’s body image concerns. 

Research is also needed on more diverse samples as well as other social media platforms (e.g., Instagram).", "title": "" }, { "docid": "64c2b9f59a77f03e6633e5804356e9fc", "text": "We present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with less than two redundant disks. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employs optimal redundant storage (i.e., two extra disks) is based on Reed-Solomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of the one required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error.", "title": "" }, { "docid": "dd8222a589e824b5189194ab697f27d7", "text": "Facial expression recognition has been investigated for many years, and there are two popular models: Action Units (AUs) and the Valence-Arousal space (V-A space) that have been widely used. However, most of the databases for estimating V-A intensity are captured in laboratory settings, and the benchmarks \"in-the-wild\" do not exist. Thus, the First Affect-In-The-Wild Challenge released a database for V-A estimation while the videos were captured in wild condition. In this paper, we propose an integrated deep learning framework for facial attribute recognition, AU detection, and V-A estimation. The key idea is to apply AUs to estimate the V-A intensity since both AUs and V-A space could be utilized to recognize some emotion categories. Besides, the AU detector is trained based on the convolutional neural network (CNN) for facial attribute recognition. In experiments, we will show the results of the above three tasks to verify the performances of our proposed network framework.", "title": "" }, { "docid": "06ab903f3de4c498e1977d7d0257f8f3", "text": "BACKGROUND\nThe analysis of microbial communities through DNA sequencing brings many challenges: the integration of different types of data with methods from ecology, genetics, phylogenetics, multivariate statistics, visualization and testing. With the increased breadth of experimental designs now being pursued, project-specific statistical analyses are often needed, and these analyses are often difficult (or impossible) for peer researchers to independently reproduce. The vast majority of the requisite tools for performing these analyses reproducibly are already implemented in R and its extensions (packages), but with limited support for high throughput microbiome census data.\n\n\nRESULTS\nHere we describe a software project, phyloseq, dedicated to the object-oriented representation and analysis of microbiome census data in R. It supports importing data from a variety of common formats, as well as many analysis techniques. 

These include calibration, filtering, subsetting, agglomeration, multi-table comparisons, diversity analysis, parallelized Fast UniFrac, ordination methods, and production of publication-quality graphics; all in a manner that is easy to document, share, and modify. We show how to apply functions from other R packages to phyloseq-represented data, illustrating the availability of a large number of open source analysis techniques. We discuss the use of phyloseq with tools for reproducible research, a practice common in other fields but still rare in the analysis of highly parallel microbiome census data. We have made available all of the materials necessary to completely reproduce the analysis and figures included in this article, an example of best practices for reproducible research.\n\n\nCONCLUSIONS\nThe phyloseq project for R is a new open-source software package, freely available on the web from both GitHub and Bioconductor.", "title": "" }, { "docid": "5853bccf3dfd3c861cd29afec0cfce7e", "text": "The performance of each datanode in a heterogeneous Hadoop cluster differs, and the number of slots that can be numbered to simultaneously execute tasks differs. For this reason, Hadoop is susceptible to replica placement problems and data replication problems. Because of this, replication problems and allocation problems occur. These problems can deteriorate the performance of Hadoop. In this paper, we summarize existing research to improve data locality, and design a data replication method to solve replication and allocation problems.", "title": "" }, { "docid": "c52c15b47e67cc9922bd29ec51eb8edc", "text": "The minimum covariance determinant (MCD) method of Rousseeuw (1984) is a highly robust estimator of multivariate location and scatter. Its objective is to find h observations (out of n) whose covariance matrix has the lowest determinant. Until now applications of the MCD were hampered by the computation time of existing algorithms, which were limited to a few hundred objects in a few dimensions. We discuss two important applications of larger size: one about a production process at Philips with n = 677 objects and p = 9 variables, and a data set from astronomy with n = 137,256 objects and p = 27 variables. To deal with such problems we have developed a new algorithm for the MCD, called FAST-MCD. The basic ideas are an inequality involving order statistics and determinants, and techniques which we call 'selective iteration' and 'nested extensions'. For small data sets FAST-MCD typically finds the exact MCD, whereas for larger data sets it gives more accurate results than existing algorithms and is faster by orders of magnitude. Moreover, FAST-MCD is able to detect an exact fit, i.e. a hyperplane containing h or more observations. The new algorithm makes the MCD method available as a routine tool for analyzing multivariate data. We also propose the distance-distance plot (or 'D-D plot') which displays MCD-based robust distances versus Mahalanobis distances, and illustrate it with some examples. We wish to thank Doug Hawkins and José Agulló for making their programs available to us. We also want to dedicate special thanks to Gertjan Otten, Frans Van Dommelen en Herman Veraa for giving us access to the Philips data, and to S.C. 

Odewahn and his research group at the California Institute of Technology for allowing us to analyze their Digitized Palomar data.", "title": "" }, { "docid": "9d35f9fff4d38b00d212e9aecab76b44", "text": "In wireless communication, antenna miniaturization is a vital issue these days. This paper presents the simulation analysis of small planar antennas using different antenna miniaturization techniques. They have brought to define miniaturization methods by which we can estimate the use of micro strip antennas. Various govt. and private sector organizations made use of these techniques to solve the problem of antenna fabrication in mobiles, satellites, missiles, radar navigational aids and military applications. These techniques are used to reduce the physical size but to increase the bandwidth and efficiency of antenna. Some approaches for antenna miniaturization are introduction of slots, slits, short meandering and novel geometries like fractals or by using higher dielectrics constant. The effect of miniaturization in various antenna parameters like, radiation efficiency, gain and bandwidth are discussed. Finally the paper reports a brief description of miniaturization of antenna by using suitable and efficient methods that includes use of double layered substrate structure in microstrip patch antenna.", "title": "" }, { "docid": "0b0723466d6fc726154befea8a1d7398", "text": "● Volume of pages makes efficient WWW navigation difficult ● Aim: To analyse users' navigation history to generate tools that increase navigational efficiency – i.e. Predictive server prefetching ● Provides a mathematical foundation to several concepts", "title": "" }, { "docid": "52f74b832784323b8eb0362b7509b7bd", "text": "Automatic affect recognition is a challenging task due to the various modalities emotions can be expressed with. Applications can be found in many domains including multimedia retrieval and human–computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using auditory and visual modalities. To capture the emotional content for various styles of speaking, robust features need to be extracted. To this purpose, we utilize a convolutional neural network (CNN) to extract features from the speech, while for the visual modality a deep residual network of 50 layers is used. In addition to the importance of feature extraction, a machine learning algorithm needs also to be insensitive to outliers while being able to model the context. To tackle this problem, long short-term memory networks are utilized. The system is then trained in an end-to-end fashion where—by also taking advantage of the correlations of each of the streams—we manage to significantly outperform, in terms of concordance correlation coefficient, traditional approaches based on auditory and visual handcrafted features for the prediction of spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.", "title": "" }, { "docid": "8db6d52ee2778d24c6561b9158806e84", "text": "Surface functionalization plays a crucial role in developing efficient nanoparticulate drug-delivery systems by improving their therapeutic efficacy and minimizing adverse effects. Here we propose a simple layer-by-layer self-assembly technique capable of constructing mesoporous silica nanoparticles (MSNs) into a pH-responsive drug delivery system with enhanced efficacy and biocompatibility. 

In this system, biocompatible polyelectrolyte multilayers of alginate/chitosan were assembled on MSN's surface to achieve pH-responsive nanocarriers. The functionalized MSNs exhibited improved blood compatibility over the bare MSNs in terms of low hemolytic and cytotoxic activity against human red blood cells. As a proof-of-concept, the anticancer drug doxorubicin (DOX) was loaded into nanocarriers to evaluate their use for the pH-responsive drug release both in vitro and in vivo. The DOX release from nanocarriers was pH dependent, and the release rate was much faster at lower pH than that of at higher pH. The in vitro evaluation on HeLa cells showed that the DOX-loaded nanocarriers provided a sustained intracellular DOX release and a prolonged DOX accumulation in the nucleus, thus resulting in a prolonged therapeutic efficacy. In addition, the pharmacokinetic and biodistribution studies in healthy rats showed that DOX-loaded nanocarriers had longer systemic circulation time and slower plasma elimination rate than free DOX. The histological results also revealed that the nanocarriers had good tissue compatibility. Thus, the biocompatible multilayers functionalized MSNs hold the substantial potential to be further developed as effective and safe drug-delivery carriers.", "title": "" }, { "docid": "4460585853808f31923fba735a64743e", "text": "The present study focuses on the relationship between teacher empowerment and teachers’ organizational commitment, professional commitment (PC) and organizational citizenship behavior (OCB). It examines which subscales of teacher empowerment can best predict these outcomes. The data were collected through a questionnaire returned by a sample of 983 teachers in Israeli middle and high schools. Pearson correlations and multiple regression analyses indicated that teachers’ perceptions of their level of empowerment are significantly related to their feelings of commitment to the organization and to the profession, and to their OCBs. Among the six subscales of empowerment, professional growth, status and self-efficacy were significant predictors of organizational and PC, while decisionmaking, self-efficacy, and status were significant predictors of OCB. Practical implications of the study are discussed in relation to teachers, principals and policy-makers. r 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1eef21abdf14dc430b333cac71d4fe07", "text": "The authors have developed an adaptive matched filtering algorithm based upon an artificial neural network (ANN) for QRS detection. They use an ANN adaptive whitening filter to model the lower frequencies of the electrocardiogram (ECG) which are inherently nonlinear and nonstationary. The residual signal which contains mostly higher frequency QRS complex energy is then passed through a linear matched filter to detect the location of the QRS complex. The authors developed an algorithm to adaptively update the matched filter template from the detected QRS complex in the ECG signal itself so that the template can be customized to an individual subject. This ANN whitening filter is very effective at removing the time-varying, nonlinear noise characteristic of ECG signals. 
The detection rate for a very noisy patient record in the MIT/BIH arrhythmia database is 99.5% with this approach, which compares favorably to the 97.5% obtained using a linear adaptive whitening filter and the 96.5% achieved with a bandpass filtering method.", "title": "" }, { "docid": "615a24719fe4300ea8971e86014ed8fe", "text": "This paper presents a new code for the analysis of gamma spectra generated by an equipment for continuous measurement of gamma radioactivity in aerosols with paper filter. It is called pGamma and has been developed by the Nuclear Engineering Research Group at the Technical University of Catalonia - Barcelona Tech and by Raditel Serveis i Subministraments Tecnològics, Ltd. The code has been developed to identify the gamma emitters and to determine their activity concentration. It generates alarms depending on the activity of the emitters and elaborates reports. Therefore it includes a library with NORM and artificial emitters of interest. The code is being adapted to the monitors of the Environmental Radiological Surveillance Network of the local Catalan Government in Spain (Generalitat de Catalunya) and is used at three stations of the Network.", "title": "" } ]
scidocsrr
a1dbebda3f8452642a3685fb7c273dc2
White-Box Traceable Ciphertext-Policy Attribute-Based Encryption Supporting Flexible Attributes
[ { "docid": "347c3929efc37dee3230189e576f14ab", "text": "Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption. This work focuses on designing ABE schemes with fast decryption algorithms. We restrict our attention to expressive systems without systemwide bounds or limitations, such as placing a limit on the number of attributes used in a ciphertext or a private key. In this setting, we present the first key-policy ABE system where ciphertexts can be decrypted with a constant number of pairings. We show that GPSW ciphertexts can be decrypted with only 2 pairings by increasing the private key size by a factor of |Γ |, where Γ is the set of distinct attributes that appear in the private key. We then present a generalized construction that allows each system user to independently tune various efficiency tradeoffs to their liking on a spectrum where the extremes are GPSW on one end and our very fast scheme on the other. This tuning requires no changes to the public parameters or the encryption algorithm. Strategies for choosing an individualized user optimization plan are discussed. Finally, we discuss how these ideas can be translated into the ciphertext-policy ABE setting at a higher cost.", "title": "" }, { "docid": "32539e8223b95b25f01f81b29a7e1dc1", "text": "In a ciphertext-policy attribute-based encryption (CP-ABE) system, decryption keys are defined over attributes shared by multiple users. Given a decryption key, it may not be always possible to trace to the original key owner. As a decryption privilege could be possessed by multiple users who own the same set of attributes, malicious users might be tempted to leak their decryption privileges to some third parties, for financial gain as an example, without the risk of being caught. This problem severely limits the applications of CP-ABE. Several traceable CP-ABE (T-CP-ABE) systems have been proposed to address this problem, but the expressiveness of policies in those systems is limited where only and gate with wildcard is currently supported. In this paper we propose a new T-CP-ABE system that supports policies expressed in any monotone access structures. Also, the proposed system is as efficient and secure as one of the best (non-traceable) CP-ABE systems currently available, that is, this work adds traceability to an existing expressive, efficient, and secure CP-ABE scheme without weakening its security or setting any particular trade-off on its performance.", "title": "" } ]
[ { "docid": "93347ca2b0e76b442b39ea518eebf551", "text": "For tackling the well-known cold-start user problem in model-based recommender systems, one approach is to recommend a few items to a cold-start user and use the feedback to learn a profile. The learned profile can then be used to make good recommendations to the cold user. In the absence of a good initial profile, the recommendations are like random probes, but if not chosen judiciously, both bad recommendations and too many recommendations may turn off a user. We formalize the cold-start user problem by asking what are the b best items we should recommend to a cold-start user, in order to learn her profile most accurately, where b, a given budget, is typically a small number. We formalize the problem as an optimization problem and present multiple non-trivial results, including NP-hardness as well as hardness of approximation. We furthermore show that the objective function, i.e., the least square error of the learned profile w.r.t. the true user profile, is neither submodular nor supermodular, suggesting efficient approximations are unlikely to exist. Finally, we discuss several scalable heuristic approaches for identifying the b best items to recommend to the user and experimentally evaluate their performance on 4 real datasets. Our experiments show that our proposed accelerated algorithms significantly outperform the prior art in running time, while achieving similar error in the learned user profile as well as in the rating predictions. ACM Reference format: Sampoorna Biswas, Laks V.S. Lakshmanan, and Senjuti Basu Ray. 2016. Combating the Cold Start User Problem in Model Based Collaborative Filtering. In Proceedings of ACM Conference, Washington, DC, USA, July 2017 (Conference’17), 11 pages. DOI: 10.1145/nnnnnnn.nnnnnnn", "title": "" }, { "docid": "a85b3be3060e64961b1d80b792d8cc63", "text": "Replisomes are the protein assemblies that replicate DNA. They function as molecular motors to catalyze template-mediated polymerization of nucleotides, unwinding of DNA, the synthesis of RNA primers, and the assembly of proteins on DNA. The replisome of bacteriophage T7 contains a minimum of proteins, thus facilitating its study. This review describes the molecular motors and coordination of their activities, with emphasis on the T7 replisome. Nucleotide selection, movement of the polymerase, binding of the processivity factor, unwinding of DNA, and RNA primer synthesis all require conformational changes and protein contacts. Lagging-strand synthesis is mediated via a replication loop whose formation and resolution is dictated by switches to yield Okazaki fragments of discrete size. Both strands are synthesized at identical rates, controlled by a molecular brake that halts leading-strand synthesis during primer synthesis. The helicase serves as a reservoir for polymerases that can initiate DNA synthesis at the replication fork. We comment on the differences in other systems where applicable.", "title": "" }, { "docid": "04b66d9285404e7fb14fcec3cd66316a", "text": "Amazon Aurora is a relational database service for OLTP workloads offered as part of Amazon Web Services (AWS). In this paper, we describe the architecture of Aurora and the design considerations leading to that architecture. We believe the central constraint in high throughput data processing has moved from compute and storage to the network. 

Aurora brings a novel architecture to the relational database to address this constraint, most notably by pushing redo processing to a multi-tenant scale-out storage service, purpose-built for Aurora. We describe how doing so not only reduces network traffic, but also allows for fast crash recovery, failovers to replicas without loss of data, and fault-tolerant, self-healing storage. We then describe how Aurora achieves consensus on durable state across numerous storage nodes using an efficient asynchronous scheme, avoiding expensive and chatty recovery protocols. Finally, having operated Aurora as a production service for over 18 months, we share the lessons we have learnt from our customers on what modern cloud applications expect from databases.", "title": "" }, { "docid": "997a1ec16394a20b3a7f2889a583b09d", "text": "This second article of our series looks at the process of designing a survey. The design process begins with reviewing the objectives, examining the target population identified by the objectives, and deciding how best to obtain the information needed to address those objectives. However, we also need to consider factors such as determining the appropriate sample size and ensuring the largest possible response rate. To illustrate our ideas, we use the three surveys described in Part 1 of this series to suggest good and bad practice in software engineering survey research.", "title": "" }, { "docid": "83856fb0a5e53c958473fdf878b89b20", "text": "Due to the expensive nature of an industrial robot, not all universities are equipped with real robots for students to operate. Learning robotics without access to an actual robotic system has proven to be difficult for undergraduate students. For instructors, it is also an obstacle to effectively teach fundamental robotic concepts. Virtual robot simulator has been explored by many researchers to create a virtual environment for teaching and learning. This paper presents structure of a course project which requires students to develop a virtual robot simulator. The simulator integrates concept of kinematics, inverse kinematics and controls. Results show that this approach assists and promotes better students' understanding of robotics.", "title": "" }, { "docid": "1eab21d97bb15cd18648e66383f8f572", "text": "Indoor localization of smart hand-held devices is essential for location-based services of pervasive applications. The previous research mainly focuses on exploring wireless signal fingerprints for this purpose, and several shortcomings need to be addressed first before real-world usage, e.g., demanding a large number of access points or labor-intensive site survey. In this paper, through a systematic empirical study, we first gain in-depth understandings of Bluetooth characteristics, i.e., the impact of various factors, such as distance, orientation, and obstacles on the Bluetooth received signal strength indicator (RSSI). Then, by mining from historical data, a novel localization model is built to describe the relationship between the RSSI and the device location. On this basis, we present an energy-efficient indoor localization scheme that leverages user motions to iteratively shrink the search space to locate the target device. A Motion-assisted Device Tracking Algorithm has been prototyped and evaluated in several real-world scenarios. 

Extensive experiments show that our algorithm is efficient in terms of localization accuracy, searching time and energy consumption.", "title": "" }, { "docid": "4d6de998e172132488b56304f742b843", "text": "Wearable devices such as Microsoft Hololens and Google glass are highly popular in recent years. As traditional input hardware is difficult to use on such platforms, vision-based hand pose tracking and gesture control techniques are more suitable alternatives. This demo shows the possibility to interact with 3D contents with bare hands on wearable devices by two Augmented Reality applications, including virtual teapot manipulation and fountain animation in hand. Technically, we use a head-mounted depth camera to capture the RGB-D images from egocentric view, and adopt the random forest to regress for the palm pose and classify the hand gesture simultaneously via a spatial-voting framework. The predicted pose and gesture are used to render the 3D virtual objects, which are overlaid onto the hand region in input RGB images with camera calibration parameters for seamless virtual and real scene synthesis.", "title": "" }, { "docid": "af9931dbd56100f8b9ea3004d7d43b25", "text": "Solvent-free microwave extraction (SFME) has been proposed as a green method for the extraction of essential oil from aromatic herbs that are extensively used in the food industry. This technique is a combination of microwave heating and dry distillation performed at atmospheric pressure without any added solvent or water. The isolation and concentration of volatile compounds is performed in a single stage. In this work, SFME and a conventional technique, hydro-distillation HD (Clevenger apparatus), are used for the extraction of essential oil from rosemary (Rosmarinus officinalis L.) and are compared. This preliminary laboratory study shows that essential oils extracted by SFME in 30 min were quantitatively (yield and kinetics profile) and qualitatively (aromatic profile) similar to those obtained using conventional hydro-distillation in 2 h. Experiments performed in a 75 L pilot microwave reactor prove the feasibility of SFME up scaling and potential industrial applications.", "title": "" }, { "docid": "4a7ed4868ff279b4d83f969076fb91e9", "text": "Information theoretic measures form a fundamental class of measures for comparing clusterings, and have recently received increasing interest. Nevertheless, a number of questions concerning their properties and inter-relationships remain unresolved. In this paper, we perform an organized study of information theoretic measures for clustering comparison, including several existing popular measures in the literature, as well as some newly proposed ones. We discuss and prove their important properties, such as the metric property and the normalization property. We then highlight to the clustering community the importance of correcting information theoretic measures for chance, especially when the data size is small compared to the number of clusters present therein. 

Of the available information theoretic based measures, we advocate the normalized information distance (NID) as a general measure of choice, for it possesses concurrently several important properties, such as being both a metric and a normalized measure, admitting an exact analytical adjusted-for-chance form, and using the nominal [0,1] range better than other normalized variants.", "title": "" }, { "docid": "c14c575eed397c522a3bc0d2b766a836", "text": "Being highly unsaturated, carotenoids are susceptible to isomerization and oxidation during processing and storage of foods. Isomerization of trans-carotenoids to cis-carotenoids, promoted by contact with acids, heat treatment and exposure to light, diminishes the color and the vitamin A activity of carotenoids. The major cause of carotenoid loss, however, is enzymatic and non-enzymatic oxidation, which depends on the availability of oxygen and the carotenoid structure. It is stimulated by light, heat, some metals, enzymes and peroxides and is inhibited by antioxidants. Data on percentage losses of carotenoids during food processing and storage are somewhat conflicting, but carotenoid degradation is known to increase with the destruction of the food cellular structure, increase of surface area or porosity, length and severity of the processing conditions, storage time and temperature, transmission of light and permeability to O2 of the packaging. Contrary to lipid oxidation, for which the mechanism is well established, the oxidation of carotenoids is not well understood. It involves initially epoxidation, formation of apocarotenoids and hydroxylation. Subsequent fragmentations presumably result in a series of compounds of low molecular masses. Completely losing its color and biological activities, the carotenoids give rise to volatile compounds which contribute to the aroma/flavor, desirable in tea and wine and undesirable in dehydrated carrot. Processing can also influence the bioavailability of carotenoids, a topic that is currently of great interest.", "title": "" }, { "docid": "6126a101cf55448f0c9ac4dbf98bc690", "text": "This paper studies the energy conversion efficiency for a rectified piezoelectric power harvester. An analytical model is proposed, and an expression of efficiency is derived under steady-state operation. In addition, the relationship among the conversion efficiency, electrically induced damping and ac–dc power output is established explicitly. It is shown that the optimization criteria are different depending on the relative strength of the coupling. For the weak electromechanical coupling system, the optimal power transfer is attained when the efficiency and induced damping achieve their maximum values. This result is consistent with that observed in the recent literature. However, a new finding shows that they are not simultaneously maximized in the strongly coupled electromechanical system.", "title": "" }, { "docid": "efcf84406a2218deeb4ca33cb8574172", "text": "Cross-site scripting attacks represent one of the major security threats in today’s Web applications. Current approaches to mitigate cross-site scripting vulnerabilities rely on either server-based or client-based defense mechanisms. Although effective for many attacks, server-side protection mechanisms may leave the client vulnerable if the server is not well patched. On the other hand, client-based mechanisms may incur a significant overhead on the client system. 

In this work, we present a hybrid client-server solution that combines the benefits of both architectures. Our proxy-based solution leverages the strengths of both anomaly detection and control flow analysis to provide accurate detection. We demonstrate the feasibility and accuracy of our approach through extended testing using real-world cross-site scripting exploits.", "title": "" }, { "docid": "ae2473ab9c004afd6908f32c7be1fd90", "text": "Financial fraud is an issue with far reaching consequences in the finance industry, government, corporate sectors, and for ordinary consumers. Increasing dependence on new technologies such as cloud and mobile computing in recent years has compounded the problem. Traditional methods of detection involve extensive use of auditing, where a trained individual manually observes reports or transactions in an attempt to discover fraudulent behaviour. This method is not only time consuming, expensive and inaccurate, but in the age of big data it is also impractical. Not surprisingly, financial institutions have turned to automated processes using statistical and computational methods. This paper presents a comprehensive investigation on financial fraud detection practices using such data mining methods, with a particular focus on computational intelligence-based techniques. Classification of the practices based on key aspects such as detection algorithm used, fraud type investigated, and success rate have been covered. Issues and challenges associated with the current practices and potential future direction of research have also been identified.", "title": "" }, { "docid": "13d46ebaf6fb6700258bb0d44ce9d1ee", "text": "Fatigue is the most prevalent and debilitating symptom experienced by breast cancer patients receiving adjuvant chemotherapy or radiation therapy and few evidence-based treatments are available to manage this distressing side-effect. The purpose of this multi-institutional randomized controlled trial was to determine the effects of exercise on fatigue levels during treatment for breast cancer. Sedentary women (N=119) with Stage 0-III breast cancer receiving outpatient adjuvant chemotherapy or radiation therapy were randomized to a home-based moderate-intensity walking exercise program or to usual care for the duration of their cancer treatment. Of participants randomized to exercise, 72% adhered to the exercise prescription; 61% of the usual care group adhered. The intention-to-treat analysis revealed no group differences in part because of a dilution of treatment effect as 39% of the usual care group exercised and 28% of the exercise group did not. When exercise participation was considered using the data analysis method of instrumental variables with principal stratification, a clinically important and statistically significant (p=0.03) effect of exercise on pretest-to-posttest change in fatigue levels was demonstrated. Adherence to a home-based moderate-intensity walking exercise program may effectively mitigate the high levels of fatigue prevalent during cancer treatment.", "title": "" }, { "docid": "b5a64e072961be91e6ee92e8a6689596", "text": "Cortical bone supports and protects our skeletal functions and it plays an important role in determining bone strength and fracture risks. Cortical bone segmentation is needed for quantitative analyses and the task is nontrivial for in vivo multi-row detector CT (MD-CT) imaging due to limited resolution and partial volume effects. 

An automated cortical bone segmentation algorithm for in vivo MD-CT imaging of distal tibia is presented. It utilizes larger contextual and topologic information of the bone using a modified fuzzy distance transform and connectivity analyses. An accuracy of 95.1% in terms of volume of agreement with true segmentations and a repeat MD-CT scan intra-class correlation of 98.2% were observed in a cadaveric study. An in vivo study involving 45 age-similar and height-matched pairs of male and female volunteers has shown that, on an average, male subjects have 16.3% thicker cortex and 4.7% increased porosity as compared to females.", "title": "" }, { "docid": "0c8fb6cc1d252429c7e1dc5b01c14910", "text": "We present a generative attribute controller (GAC), a novel functionality for generating or editing an image while intuitively controlling large variations of an attribute. This controller is based on a novel generative model called the conditional filtered generative adversarial network (CFGAN), which is an extension of the conventional conditional GAN (CGAN) that incorporates a filtering architecture into the generator input. Unlike the conventional CGAN, which represents an attribute directly using an observable variable (e.g., the binary indicator of attribute presence) so its controllability is restricted to attribute labeling (e.g., restricted to an ON or OFF control), the CFGAN has a filtering architecture that associates an attribute with a multi-dimensional latent variable, enabling latent variations of the attribute to be represented. We also define the filtering architecture and training scheme considering controllability, enabling the variations of the attribute to be intuitively controlled using typical controllers (radio buttons and slide bars). We evaluated our CFGAN on MNIST, CUB, and CelebA datasets and show that it enables large variations of an attribute to be not only represented but also intuitively controlled while retaining identity. We also show that the learned latent space has enough expressive power to conduct attribute transfer and attribute-based image retrieval.", "title": "" }, { "docid": "8f750438e7d78873fd33174d2e347ea5", "text": "This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. The activity recognition is usually done through two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had some limited performance because they focused only on one part between the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify so varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing the artificial neural network based on the Allen's temporal relations. The experimental results show that our combined method provides the higher recognition accuracy for various activities, as compared with other data mining classification algorithms. 
Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.", "title": "" }, { "docid": "5cc929181c4a8ab7538b7bfc68015cf9", "text": "The IGBT can run into different short-circuit types (SC I, SC II, SC III). Especially in SC II and III, an interaction between the gate drive unit and the IGBT takes place. A self-turn-off mechanism after short-circuit turn on can occur. Parasitic elements in the connection between the IGBT and the gate unit as well as asymmetrical wiring of devices connected in parallel are of effect to the short-circuit capability. In high-voltage IGBTs, filament formation can occur at short-circuit condition. Destructive measurements with its failure patterns and short-circuit protection methods are shown.", "title": "" }, { "docid": "910e05ecfa114b933c0722ab996b6bad", "text": "This paper deals with an effective power transfer scheme between the solar photovoltaic (PV) array and single-phase grid, feeding a field-oriented-controlled (FOC) permanent-magnet synchronous motor (PMSM) drive applied to a water-pumping system (WPS). Owing to the intermittency associated with solar (PV) system, the requirement of constant water supply is not possible with the standalone system. In order to mitigate this, a grid-intergraded WPS is proposed here. The grid integration enables the consumer an uninterrupted operation of water pump irrespective of solar insolation level. Moreover, the PV power can be fed to the utility grid when water pumping is not required. To make it possible, one voltage-source converter (VSC) and one voltage-source inverter connected to a common dc link are used for utility grid and PMSM control, respectively. The unit vector template theory is utilized to generate switching pulses for VSC to control the bidirectional power flow between the solar PV system and utility grid through the common dc link. A sensorless FOC is used to drive the PMSM coupled to the water pump. An intermediate stage boost converter is used for extracting optimum power from solar PV array under variable insolation. A perturb and observe algorithm is used for generating the duty ratio for the maximum power point (MPP) operation. The applicability of overall system constituting utility grid in conjugation with PV array fed PMSM-coupled water pump ensuring bidirectional power flow control with MPP tracking of PV array and abiding the utility grid, IEEE-519 standard for power factor, and total harmonic distortion is simulated in MATLAB/Simulink environment with SimPowerSystem toolbox and validated on a prototype developed in the laboratory. The system prototype is tested under variable solar insolation and grid abnormalities such as voltage sag and voltage swell.", "title": "" }, { "docid": "613483d7c91a9df5e0c305ac04c1bc13", "text": "This paper aims at the annotation of movement phrases in Vietnamese folk dance videos that were mainly gathered, stored and used in teaching at art schools and in preserving cultural intangible heritages (performed by different famous folk dance masters). We propose a framework of automatic movement phrase annotation, in which the motion vectors are used as movement phrase features. Movement phrase classification can be carried out, based on dancer’s trajectories. A deep investigation of Vietnamese folk dance gives an idea of using optical flow as movement phrase features in movement phrase detection and classification. 
For the richness and usefulness in annotation of Vietnamese folk dance, a lookup table of movement phrase descriptions is defined. In initial experiments, a sample movement phrase dataset is built up to train k-NN classification model. Experiments have shown the effectiveness of the proposed framework of automatic movement phrase annotation with classification accuracy at least 88%.", "title": "" } ]
scidocsrr
a574dce9f1e9f04551b5206b1a8362f0
Yin-yang: concealing the deep embedding of DSLs
[ { "docid": "7dcdf69f47a0a56d437cc8b7ea5352a6", "text": "A wide range of domain-specific languages (DSLs) has been implemented successfully by embedding them in general purpose languages. This paper reviews embedding, and summarizes how two alternative techniques—staged interpreters and templates—can be used to overcome the limitations of embedding. Both techniques involve a form of generative programming. The paper reviews and compares three programming languages that have special support for generative programming. Two of these languages (MetaOCaml and Template Haskell) are research languages, while the third (C++) is already in wide industrial use. The paper identifies several dimensions that can serve as a basis for comparing generative languages.", "title": "" } ]
[ { "docid": "8d90b9fbf7af1ea36f93f88e6ce11ba2", "text": "Given its serious implications for psychological and socio-emotional health, the prevention of problem gambling among adolescents is increasingly acknowledged as an area requiring attention. The theory of planned behavior (TPB) is a well-established model of behavior change that has been studied in the development and evaluation of primary preventive interventions aimed at modifying cognitions and behavior. However, the utility of the TPB has yet to be explored as a framework for the development of adolescent problem gambling prevention initiatives. This paper first examines the existing empirical literature addressing the effectiveness of school-based primary prevention programs for adolescent gambling. Given the limitations of existing programs, we then present a conceptual framework for the integration of the TPB in the development of effective problem gambling preventive interventions. The paper describes the TPB, demonstrates how the framework has been applied to gambling behavior, and reviews the strengths and limitations of the model for the design of primary prevention initiatives targeting adolescent risk and addictive behaviors, including adolescent gambling.", "title": "" }, { "docid": "b23d6350c5751e5250883edb16db9a9e", "text": "We present a novel anthropometric three dimensional (Anthroface 3D) face recognition algorithm, which is based on a systematically selected set of discriminatory structural characteristics of the human face derived from the existing scientific literature on facial anthropometry. We propose a novel technique for automatically detecting 10 anthropometric facial fiducial points that are associated with these discriminatory anthropometric features. We isolate and employ unique textural and/or structural characteristics of these fiducial points, along with the established anthropometric facial proportions of the human face for detecting them. Lastly, we develop a completely automatic face recognition algorithm that employs facial 3D Euclidean and geodesic distances between these 10 automatically located anthropometric facial fiducial points and a linear discriminant classifier. On a database of 1149 facial images of 118 subjects, we show that the standard deviation of the Euclidean distance of each automatically detected fiducial point from its manually identified position is less than 2.54 mm. We further show that the proposed Anthroface 3D recognition algorithm performs well (equal error rate of 1.98% and a rank 1 recognition rate of 96.8%), out performs three of the existing benchmark 3D face recognition algorithms, and is robust to the observed fiducial point localization errors.", "title": "" }, { "docid": "ba6fe1b26d76d7ff3e84ddf3ca5d3e35", "text": "The spacing effect describes the robust finding that long-term learning is promoted when learning events are spaced out in time rather than presented in immediate succession. Studies of the spacing effect have focused on memory processes rather than for other types of learning, such as the acquisition and generalization of new concepts. In this study, early elementary school children (5- to 7-year-olds; N = 36) were presented with science lessons on 1 of 3 schedules: massed, clumped, and spaced. The results revealed that spacing lessons out in time resulted in higher generalization performance for both simple and complex concepts. 
Spaced learning schedules promote several types of learning, strengthening the implications of the spacing effect for educational practices and curriculum.", "title": "" }, { "docid": "9fbd120bf56dd277534192f34385dabc", "text": "The D3 JavaScript library has become a ubiquitous tool for developing visualizations on the Web. Yet, once a D3 visualization is published online its visual style is difficult to change. We present a pair of tools for deconstructing and restyling existing D3 visualizations. Our deconstruction tool analyzes a D3 visualization to extract the data, the marks and the mappings between them. Our restyling tool lets users modify the visual attributes of the marks as well as the mappings from the data to these attributes. Together our tools allow users to easily modify D3 visualizations without examining the underlying code and we show how they can be used to deconstruct and restyle a variety of D3 visualizations.", "title": "" }, { "docid": "b39904ccd087e59794cf2cc02e5d2644", "text": "In this paper, we propose a novel walking method for torque controlled robots. The method is able to produce a wide range of speeds without requiring off-line optimizations and re-tuning of parameters. We use a quadratic whole-body optimization method running online which generates joint torques, given desired Cartesian accelerations of center of mass and feet. Using a dynamics model of the robot inside this optimizer, we ensure both compliance and tracking, required for fast locomotion. We have designed a foot-step planner that uses a linear inverted pendulum as simplified robot internal model. This planner is formulated as a quadratic convex problem which optimizes future steps of the robot. Fast libraries help us performing these calculations online. With very few parameters to tune and no perception, our method shows notable robustness against strong external pushes, relatively large terrain variations, internal noises, model errors and also delayed communication.", "title": "" }, { "docid": "16da6b46cd53304923720ba4b5e92427", "text": "Despite its unambiguous advantages, cellular phone use has been associated with harmful or potentially disturbing behaviors. Problematic use of the mobile phone is considered as an inability to regulate one’s use of the mobile phone, which eventually involves negative consequences in daily life (e.g., financial problems). The current article describes what can be considered dysfunctional use of the mobile phone and emphasizes its multifactorial nature. Validated assessment instruments to measure problematic use of the mobile phone are described. The available literature on risk factors for dysfunctional mobile phone use is then reviewed, and a pathways model that integrates the existing literature is proposed. Finally, the assumption is made that dysfunctional use of the mobile phone is part of a spectrum of cyber addictions that encompasses a variety of dysfunctional behaviors and implies involvement in specific online activities (e.g., video games, gambling, social networks, sex-related websites).", "title": "" }, { "docid": "900cd18e168e2616304e7425c807e753", "text": "Diabetes prevention requires lifestyle changes, and traditional educational programs for lifestyle changes have had low attendance rates in ethnic populations. This article describes the development and implementation of an educational program, emphasizing retention strategies, cultural tailoring and community participation. 
Community-based participatory research approaches were used to adapt and test the feasibility of a culturally tailored lifestyle intervention (named Health is Wealth) for Filipino-American adults at risk for diabetes (n = 40) in order to increase program attendance. A unique feature of this program was the flexibility of scheduling the eight classes, and inclusion of activities, foods and proverbs consistent with Filipino culture. We found that with this approach, overall program attendance for the experimental and wait-listed control groups was 88% and participant satisfaction was high with 93% very satisfied. Flexible scheduling, a bilingual facilitator for the classes, and the community-academic partnership contributed to the high attendance for this lifestyle intervention.", "title": "" }, { "docid": "3a709dd22392905d05fd4d737597ad4d", "text": "Lung cancer is the most common cancer that cannot be ignored and cause death with late health care. Currently, CT can be used to help doctors detect the lung cancer in the early stages. In many cases, the diagnosis of identifying the lung cancer depends on the experience of doctors, which may ignore some patients and cause some problems. Deep learning has been proved as a popular and powerful method in many medical imaging diagnosis areas. In this paper, three types of deep neural networks (e.g., CNN, DNN, and SAE) are designed for lung cancer calcification. Those networks are applied to the CT image classification task with some modification for the benign and malignant lung nodules. Those networks were evaluated on the LIDC-IDRI database. The experimental results show that the CNN network archived the best performance with an accuracy of 84.15%, sensitivity of 83.96%, and specificity of 84.32%, which has the best result among the three networks.", "title": "" }, { "docid": "cdfcc894d32c9a6a3a076d3e978d400f", "text": "The earliest Convolution Neural Network (CNN) model is leNet-5 model proposed by LeCun in 1998. However, in the next few years, the development of CNN had been almost stopped until the article ‘Reducing the dimensionality of data with neural networks’ presented by Hinton in 2006. CNN started entering a period of rapid development. AlexNet won the championship in the image classification contest of ImageNet with the huge superiority of 11% beyond the second place in 2012, and the proposal of DeepFace and DeepID, as two relatively successful models for high-performance face recognition and authentication in 2014, marking the important position of CNN. Convolution Neural Network (CNN) is an efficient recognition algorithm widely used in image recognition and other fields in recent years. That the core features of CNN include local field, shared weights and pooling greatly reducing the parameters, as well as simple structure, make CNN become an academic focus. In this paper, the Convolution Neural Network’s history and structure are summarized. And then several areas of Convolutional Neural Network applications are enumerated. At last, some new insights for the future research of CNN are presented.", "title": "" }, { "docid": "e0ae0929df9b396d35f02c8dc2e2487a", "text": "While selecting the hyper-parameters of Neural Networks (NNs) has been so far treated as an art, the emergence of more complex, deeper architectures poses increasingly more challenges to designers and Machine Learning (ML) practitioners, especially when power and memory constraints need to be considered. 
In this work, we propose HyperPower, a framework that enables efficient Bayesian optimization and random search in the context of power- and memory-constrained hyperparameter optimization for NNs running on a given hardware platform. HyperPower is the first work (i) to show that power consumption can be used as a low-cost, a priori known constraint, and (ii) to propose predictive models for the power and memory of NNs executing on GPUs. Thanks to HyperPower, the number of function evaluations and the best test error achieved by a constraint-unaware method are reached up to 112.99× and 30.12× faster, respectively, while never considering invalid configurations. HyperPower significantly speeds up the hyper-parameter optimization, achieving up to 57.20× more function evaluations compared to constraint-unaware methods for a given time interval, effectively yielding significant accuracy improvements by up to 67.6%.", "title": "" }, { "docid": "7b5331b0e6ad693fc97f5f3b543bf00c", "text": "Relational learning deals with data that are characterized by relational structures. An important task is collective classification, which is to jointly classify networked objects. While it holds a great promise to produce a better accuracy than noncollective classifiers, collective classification is computational challenging and has not leveraged on the recent breakthroughs of deep learning. We present Column Network (CLN), a novel deep learning model for collective classification in multirelational domains. CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) longrange, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient, linear in the size of the network and the number of relations. We evaluate CLN on multiple real-world applications: (a) delay prediction in software projects, (b) PubMed Diabetes publication classification and (c) film genre classification. In all applications, CLN demonstrates a higher accuracy than state-of-the-art rivals.", "title": "" }, { "docid": "5203f520e6992ae6eb2e8cb28f523f6a", "text": "Integrons can insert and excise antibiotic resistance genes on plasmids in bacteria by site-specific recombination. Class 1 integrons code for an integrase, IntI1 (337 amino acids in length), and are generally borne on elements derived from Tn5090, such as that found in the central part of Tn21. A second class of integron is found on transposon Tn7 and its relatives. We have completed the sequence of the Tn7 integrase gene, intI2, which contains an internal stop codon. This codon was found to be conserved among intI2 genes on three other Tn7-like transposons harboring different cassettes. The predicted peptide sequence (IntI2*) is 325 amino acids long and is 46% identical to IntI1. In order to detect recombination activity, the internal stop codon at position 179 in the parental allele was changed to a triplet coding for glutamic acid. The sequences flanking the cassette arrays in the class 1 and 2 integrons are not closely related, but a common pool of mobile cassettes is used by the different integron classes; two of the three antibiotic resistance cassettes on Tn7 and its close relatives are also found in various class 1 integrons. 
We also observed a fourth excisable cassette downstream of those described previously in Tn7. The fourth cassette encodes a 165-amino-acid protein of unknown function with 6.5 contiguous repeats of a sequence coding for 7 amino acids. IntI2*179E promoted site-specific excision of each of the cassettes in Tn7 at different frequencies. The integrases from Tn21 and Tn7 showed limited cross-specificity in that IntI1 could excise all cassettes from both Tn21 and Tn7. However, we did not observe a corresponding excision of the aadA1 cassette from Tn21 by IntI2*179E.", "title": "" }, { "docid": "20bb169da47abfd4c798ddecca8f2abe", "text": "The following coins problem is a version of a multi-armed bandit problem where one has to select from among a set of objects, say classifiers, after an experimentation phase that is constrained by a time or cost budget. The question is how to spend the budget. The problem involves pure exploration only, differentiating it from typical multi-armed bandit problems involving an exploration/exploitation tradeoff [BF85]. It is an abstraction of the following scenarios: choosing from among a set of alternative treatments after a fixed number of clinical trials, determining the best parameter settings for a program given a deadline that only allows a fixed number of runs; or choosing a life partner in the bachelor/bachelorette TV show where time is limited. We are interested in the computational complexity of the coins problem and/or efficient algorithms with approximation guarantees.", "title": "" }, { "docid": "d3d3300ffbb9376080d8854131342fb7", "text": " However, like all powerful technologies, great care must be taken in its development and deployment. To reap the societal benefits of AI systems, we will first need to trust them and make sure that they follow the same ethical principles, moral values, professional codes, and social norms that we humans would follow in the same scenario. Research and educational efforts, as well as carefully designed regulations, must be put in place to achieve this goal.", "title": "" }, { "docid": "ca32fb4df9c03951e14ce9e06f7d90a0", "text": "Future wireless local area networks (WLANs) are expected to serve thousands of users in diverse environments. To address the new challenges that WLANs will face, and to overcome the limitations that previous IEEE standards introduced, a new IEEE 802.11 amendment is under development. IEEE 802.11ax aims to enhance spectrum efficiency in a dense deployment; hence system throughput improves. Dynamic Sensitivity Control (DSC) and BSS Color are the main schemes under consideration in IEEE 802.11ax for improving spectrum efficiency In this paper, we evaluate DSC and BSS Color schemes when physical layer capture (PLC) is modelled. PLC refers to the case that a receiver successfully decodes the stronger frame when collision occurs. It is shown, that PLC could potentially lead to fairness issues and higher throughput in specific cases. We study PLC in a small and large scale scenario, and show that PLC could also improve fairness in specific scenarios.", "title": "" }, { "docid": "f783860e569d9f179466977db544bd01", "text": "In medical research, continuous variables are often converted into categorical variables by grouping values into two or more categories. We consider in detail issues pertaining to creating just two groups, a common approach in clinical research. 
We argue that the simplicity achieved is gained at a cost; dichotomization may create rather than avoid problems, notably a considerable loss of power and residual confounding. In addition, the use of a data-derived 'optimal' cutpoint leads to serious bias. We illustrate the impact of dichotomization of continuous predictor variables using as a detailed case study a randomized trial in primary biliary cirrhosis. Dichotomization of continuous data is unnecessary for statistical analysis and in particular should not be applied to explanatory variables in regression models.", "title": "" }, { "docid": "543a0cdc8101c6f253431c8a4d697be6", "text": "While significant progress has been made in the image captioning task, video description is still comparatively in its infancy, due to the complex nature of video data. Generating multi-sentence descriptions for long videos is even more challenging. Among the main issues are the fluency and coherence of the generated descriptions, and their relevance to the video. Recently, reinforcement and adversarial learning based methods have been explored to improve the image captioning models; however, both types of methods suffer from a number of issues, e.g. poor readability and high redundancy for RL and stability issues for GANs. In this work, we instead propose to apply adversarial techniques during inference, designing a discriminator which encourages better multi-sentence video description. In addition, we find that a multi-discriminator “hybrid” design, where each discriminator targets one aspect of a description, leads to the best results. Specifically, we decouple the discriminator to evaluate on three criteria: 1) visual relevance to the video, 2) language diversity and fluency, and 3) coherence across sentences. Our approach results in more accurate, diverse and coherent multi-sentence video descriptions, as shown by automatic as well as human evaluation on the popular ActivityNet Captions dataset.", "title": "" }, { "docid": "4c1c72fde3bbe25f6ff3c873a87b86ba", "text": "The purpose of this study was to translate the Foot Function Index (FFI) into Italian, to perform a cross-cultural adaptation and to evaluate the psychometric properties of the Italian version of FFI. The Italian FFI was developed according to the recommended forward/backward translation protocol and evaluated in patients with foot and ankle diseases. Feasibility, reliability [intraclass correlation coefficient (ICC)], internal consistency [Cronbach’s alpha (CA)], construct validity (correlation with the SF-36 and a visual analogue scale (VAS) assessing for pain), responsiveness to surgery were assessed. The standardized effect size and standardized response mean were also evaluated. A total of 89 patients were recruited (mean age 51.8 ± 13.9 years, range 21–83). The Italian version of the FFI consisted in 18 items separated into a pain and disability subscales. CA value was 0.95 for both the subscales. The reproducibility was good with an ICC of 0.94 and 0.91 for pain and disability subscales, respectively. A strong correlation was found between the FFI and the scales of the SF-36 and the VAS with related content, particularly in the areas of physical function and pain was observed indicating good construct validity. After surgery, the mean FFI improved from 55.9 ± 24.8 to 32.4 ± 26.3 for the pain subscale and from 48.8 ± 28.8 to 24.9 ± 23.7 for the disability subscale (P < 0.01). 
The Italian version of the FFI showed satisfactory psychometric properties in Italian patients with foot and ankle diseases. Further testing in different and larger samples is required in order to ensure the validity and reliability of this score.", "title": "" }, { "docid": "79a9208d16541c7ed4fbc9996a82ef6a", "text": "Query processing in data integration occurs over network-bound, autonomous data sources. This requires extensions to traditional optimization and execution techniques for three reasons: there is an absence of quality statistics about the data, data transfer rates are unpredictable and bursty, and slow or unavailable data sources can often be replaced by overlapping or mirrored sources. This paper presents the Tukwila data integration system, designed to support adaptivity at its core using a two-pronged approach. Interleaved planning and execution with partial optimization allows Tukwila to quickly recover from decisions based on inaccurate estimates. During execution, Tukwila uses adaptive query operators such as the double pipelined hash join, which produces answers quickly, and the dynamic collector, which robustly and efficiently computes unions across overlapping data sources. We demonstrate that the Tukwila architecture extends previous innovations in adaptive execution (such as query scrambling, mid-execution re-optimization, and choose nodes), and we present experimental evidence that our techniques result in behavior desirable for a data integration system.", "title": "" }, { "docid": "c34cadf2a05909bb659e0e52f77dd0c3", "text": "The present complexity in designing web applications makes software security a difficult goal to achieve. An attacker can explore a deployed service on the web and attack at his/her own leisure. Moving Target Defense (MTD) in web applications is an effective mechanism to nullify this advantage of their reconnaissance but the framework demands a good switching strategy when switching between multiple configurations for its web-stack. To address this issue, we propose the modeling of a real world MTD web application as a repeated Bayesian game. We formulate an optimization problem that generates an effective switching strategy while considering the cost of switching between different web-stack configurations. To use this model for a developed MTD system, we develop an automated system for generating attack sets of Common Vulnerabilities and Exposures (CVEs) for input attacker types with predefined capabilities. Our framework obtains realistic reward values for the players (defenders and attackers) in this game by using security domain expertise on CVEs obtained from the National Vulnerability Database (NVD). We also address the issue of prioritizing vulnerabilities that when fixed, improves the security of the MTD system. Lastly, we demonstrate the robustness of our proposed model by evaluating its performance when there is uncertainty about input attacker information.", "title": "" } ]
scidocsrr
cd91387807e20c7622e5adb99fdc12ff
The Co-Design Method for Robust Satisfactory H∞/H2 Fault-Tolerant Event-Triggered Control of NCS with α-Safety Degree
[ { "docid": "26560de19573a47065e23150a6a56047", "text": "In this note, we revisit the problem of scheduling stabilizing control tasks on embedded processors. We start from the paradigm that a real-time scheduler could be regarded as a feedback controller that decides which task is executed at any given instant. This controller has for objective guaranteeing that (control unrelated) software tasks meet their deadlines and that stabilizing control tasks asymptotically stabilize the plant. We investigate a simple event-triggered scheduler based on this feedback paradigm and show how it leads to guaranteed performance thus relaxing the more traditional periodic execution requirements.", "title": "" } ]
[ { "docid": "d1114f1ced731a700d40dd97fe62b82b", "text": "Agricultural sector is playing vital role in Indian economy, in which irrigation mechanism is of key concern. Our paper aims to find the exact field condition and to control the wastage of water in the field and to provide exact controlling of field by using the drip irrigation, atomizing the agricultural environment by using the components and building the necessary hardware. For the precisely monitoring and controlling of the agriculture filed, different types of sensors were used. To implement the proposed system ARM LPC2148 Microcontroller is used. The irrigation mechanism is monitored and controlled more efficiently by the proposed system, which is a real time feedback control system. GSM technology is used to inform the end user about the exact field condition. Actually this method of irrigation system has been proposed primarily to save resources, yield of crops and farm profitability.", "title": "" }, { "docid": "d5ee4e1cb286009fc5c3d398eade8b35", "text": "In this paper we present an end-to-end deep learning framework to turn images that show dynamic content, such as vehicles or pedestrians, into realistic static frames. This objective encounters two main challenges: detecting all the dynamic objects, and inpainting the static occluded background with plausible imagery. The second problem is approached with a conditional generative adversarial model that, taking as input the original dynamic image and its dynamic/static binary mask, is capable of generating the final static image. The former challenge is addressed by the use of a convolutional network that learns a multi-class semantic segmentation of the image. These generated images can be used for applications such as augmented reality or vision-based robot localization purposes. To validate our approach, we show both qualitative and quantitative comparisons against other state-of-the-art inpainting methods by removing the dynamic objects and hallucinating the static structure behind them. Furthermore, to demonstrate the potential of our results, we carry out pilot experiments that show the benefits of our proposal for visual place recognition.", "title": "" }, { "docid": "ff2d26414f670001892f6bdc371156d0", "text": "The increase of voice-based interaction has changed the way people seek information, making search more conversational. Development of effective conversational approaches to search requires better understanding of how people express information needs in dialogue. This paper describes the creation and examination of over 32K spoken utterances collected during 34 hours of collaborative search tasks. The contribution of this work is three-fold. First, we propose a model of conversational information needs (CINs) based on a synthesis of relevant theories in Information Seeking and Retrieval. Second, we show several behavioural patterns of CINs based on the proposed model. Third, we identify effective feature groups that may be useful for detecting CINs categories from conversations. This paper concludes with a discussion of how these findings can facilitate advance of conversational search applications.", "title": "" }, { "docid": "bcd7af5c474d931c0a76b654775396c2", "text": "Task and motion planning subject to Linear Temporal Logic (LTL) specifications in complex, dynamic environments requires efficient exploration of many possible future worlds. 
Model-free reinforcement learning has proven successful in a number of challenging tasks, but shows poor performance on tasks that require long-term planning. In this work, we integrate Monte Carlo Tree Search with hierarchical neural net policies trained on expressive LTL specifications. We use reinforcement learning to find deep neural networks representing both low-level control policies and task-level “option policies” that achieve high-level goals. Our combined architecture generates safe and responsive motion plans that respect the LTL constraints. We demonstrate our approach in a simulated autonomous driving setting, where a vehicle must drive down a road in traffic, avoid collisions, and navigate an intersection, all while obeying rules of the road.", "title": "" }, { "docid": "d5d96493b34cfbdf135776e930ec5979", "text": "We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. Each path yields interval bounds that can be summed up with a \"coverage\" bound to yield an interval that encloses the probability of assertion for the program as a whole. We demonstrate promising results on a suite of benchmarks from many different sources including robotic manipulators and medical decision making programs.", "title": "" }, { "docid": "a27d4083741f75f44cd85a8161f1b8b1", "text": "Graves’ disease (GD) and Hashimoto's thyroiditis (HT) represent the commonest forms of autoimmune thyroid disease (AITD) each presenting with distinct clinical features. Progress has been made in determining association of HLA class II DRB1, DQB1 and DQA1 loci with GD demonstrating a predisposing effect for DR3 (DRB1*03-DQB1*02-DQA1*05) and a protective effect for DR7 (DRB1*07-DQB1*02-DQA1*02). Small data sets have hindered progress in determining HLA class II associations with HT. The aim of this study was to investigate DRB1-DQB1-DQA1 in the largest UK Caucasian HT case control cohort to date comprising 640 HT patients and 621 controls. A strong association between HT and DR4 (DRB1*04-DQB1*03-DQA1*03) was detected (P=6.79 × 10−7, OR=1.98 (95% CI=1.51–2.59)); however, only borderline association of DR3 was found (P=0.050). Protective effects were also detected for DR13 (DRB1*13-DQB1*06-DQA1*01) (P=0.001, OR=0.61 (95% CI=0.45–0.83)) and DR7 (P=0.013, OR=0.70 (95% CI=0.53–0.93)). Analysis of our unique cohort of subjects with well characterized AITD has demonstrated clear differences in association within the HLA class II region between HT and GD. 
Although HT and GD share a number of common genetic markers this study supports the suggestion that differences in HLA class II genotype may, in part, contribute to the different immunopathological processes and clinical presentation of these related diseases.", "title": "" }, { "docid": "222ab6804b3fe15fe23b27bc7f5ede5f", "text": "Single-image super-resolution (SR) reconstruction via sparse representation has recently attracted broad interest. It is known that a low-resolution (LR) image is susceptible to noise or blur due to the degradation of the observed image, which would lead to a poor SR performance. In this paper, we propose a novel robust edge-preserving smoothing SR (REPS-SR) method in the framework of sparse representation. An EPS regularization term is designed based on gradient-domain-guided filtering to preserve image edges and reduce noise in the reconstructed image. Furthermore, a smoothing-aware factor adaptively determined by the estimation of the noise level of LR images without manual interference is presented to obtain an optimal balance between the data fidelity term and the proposed EPS regularization term. An iterative shrinkage algorithm is used to obtain the SR image results for LR images. The proposed adaptive smoothing-aware scheme makes our method robust to different levels of noise. Experimental results indicate that the proposed method can preserve image edges and reduce noise and outperforms the current state-of-the-art methods for noisy images.", "title": "" }, { "docid": "03a18f34ee67c579b4dd785e3ebd9baa", "text": "Building a complete inertial navigation system using the limited quality data provided by current smartphones has been regarded challenging, if not impossible. This paper shows that by careful crafting and accounting for the weak information in the sensor samples, smartphones are capable of pure inertial navigation. We present a probabilistic approach for orientation and use-case free inertial odometry, which is based on double-integrating rotated accelerations. The strength of the model is in learning additive and multiplicative IMU biases online. We are able to track the phone position, velocity, and pose in realtime and in a computationally lightweight fashion by solving the inference with an extended Kalman filter. The information fusion is completed with zero-velocity updates (if the phone remains stationary), altitude correction from barometric pressure readings (if available), and pseudo-updates constraining the momentary speed. We demonstrate our approach using an iPad and iPhone in several indoor dead-reckoning applications and in a measurement tool setup.", "title": "" }, { "docid": "cf131167592f02790a1b4e38ed3b5375", "text": "Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNN), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different from recent works that reconstruct and refine the 3D face in an iterative manner using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, and thus the complicated 3D rendering process can be avoided. Moreover, we integrate in the DNN architecture two components, namely a multi-task loss function and a fusion convolutional neural network (CNN) to improve facial expression reconstruction. 
With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific. Therefore, higher layer features are useful. In comparison, the expressive 3D facial shape favors lower or intermediate layer features. With the fusion-CNN, features from different intermediate layers are fused and transformed for predicting the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction.", "title": "" }, { "docid": "be0033b0f251970f8a8876b28cd2042e", "text": "A power transformer will yield a frequency response which is unique to its mechanical geometry and electrical properties. Changes in the frequency response of a transformer can be potential indicators of winding deformation as well as other structural and electrical problems. A diagnostic tool which leverages this knowledge in order to detect such changes is frequency-response analysis (FRA). To date, FRA has been used to identify changes in a transformer's frequency response but with limited insight into the underlying cause of the change. However, there is now a growing research interest in specifically identifying the structural change in a transformer directly from its FRA signature. The aim of this paper is to support FRA interpretation through the development of wideband three-phase transformer models which are based on three types of FRA tests. The resulting models can be used as a flexible test bed for parameter sensitivity analysis, leading to greater insight into the effects that geometric change can have on transformer FRA. This paper will demonstrate the applicability of this modeling approach by simultaneously fitting each model to the corresponding FRA data sets without a priori knowledge of the transformer's internal dimensions, and then quantitatively assessing the accuracy of key model parameters.", "title": "" }, { "docid": "e5c6ed3e71cb971b5766a18facbc76f3", "text": "The main objective of the present paper is to develop a smart wireless sensor network (WSN) for an agricultural environment. Monitoring agricultural environment for various factors such as temperature and humidity along with other factors can be of significance. The advanced development in wireless sensor networks can be used in monitoring various parameters in agriculture. Due to uneven natural distribution of rain water it is very difficult for farmers to monitor and control the distribution of water to agriculture field in the whole farm or as per the requirement of the crop. There is no ideal irrigation method for all weather conditions, soil structure and variety of crops cultures. Farmers suffer large financial losses because of wrong prediction of weather and incorrect irrigation methods. Sensors are the essential device for precision agricultural applications. In this paper we have detailed about how to utilize the sensors in crop field area and explained about Wireless Sensor Network (WSN), Zigbee network, Protocol stack, zigbee Applications and the results are given, when implemented the zigbee network experimentally in real time environment.", "title": "" }, { "docid": "2a7433cf92c8f845c951114eca8ce192", "text": "A through-dielectric switched-antenna-array radar imaging system is shown that produces near real-time imagery of targets on the opposite side of a lossy dielectric slab. 
This system operates at S-band, provides a frame rate of 0.5 Hz, and operates at a stand-off range of 6 m or greater. The antenna array synthesizes 44 effective phase centers in a linear array providing $\\lambda/2$ element-to-element spacing by time division multiplexing the radar's transmit and receive ports between 8 receive elements and 13 transmit elements, producing 2D (range vs. cross-range) imagery of what is behind a slab. Laboratory measurements agree with simulations, the air-slab interface is range gated out of the image, and target scenes consisting of cylinders and soda cans are imaged through the slab. A 2D model of a slab, a cylinder, and phase centers shows that blurring due to the slab and bistatic phase centers on the array is negligible when the radar sensor is located at stand-off ranges of 6 m or greater.", "title": "" }, { "docid": "ee473a0bb8b96249e61ad5e3925c11c2", "text": "Simple, short, and compact hashtags cover a wide range of information on social networks. Although many works in the field of natural language processing (NLP) have demonstrated the importance of hashtag recommendation, hashtag recommendation for images has barely been studied. In this paper, we introduce the HARRISON dataset, a benchmark on hashtag recommendation for real world images in social networks. The HARRISON dataset is a realistic dataset, composed of 57,383 photos from Instagram and an average of 4.5 associated hashtags for each photo. To evaluate our dataset, we design a baseline framework consisting of visual feature extractor based on convolutional neural network (CNN) and multi-label classifier based on neural network. Based on this framework, two single feature-based models, object-based and scene-based model, and an integrated model of them are evaluated on the HARRISON dataset. Our dataset shows that hashtag recommendation task requires a wide and contextual understanding of the situation conveyed in the image. As far as we know, this work is the first vision-only attempt at hashtag recommendation for real world images in social networks. We expect this benchmark to accelerate the advancement of hashtag recommendation.", "title": "" }, { "docid": "5f77e21de8f68cba79fc85e8c0e7725e", "text": "We introduce structured prediction energy networks (SPENs), a flexible framework for structured prediction. A deep architecture is used to define an energy function of candidate labels, and then predictions are produced by using backpropagation to iteratively optimize the energy with respect to the labels. This deep architecture captures dependencies between labels that would lead to intractable graphical models, and performs structure learning by automatically learning discriminative features of the structured output. One natural application of our technique is multi-label classification, which traditionally has required strict prior assumptions about the interactions between labels to ensure tractable learning and prediction problems. We are able to apply SPENs to multi-label problems with substantially larger label sets than previous applications of structured prediction, while modeling high-order interactions using minimal structural assumptions. Overall, deep learning provides remarkable tools for learning features of the inputs to a prediction problem, and this work extends these techniques to learning features of structured outputs. 
Our experiments provide impressive performance on a variety of benchmark multi-label classification tasks, demonstrate that our technique can be used to provide interpretable structure learning, and illuminate fundamental trade-offs between feed-forward and iterative structured prediction techniques.", "title": "" }, { "docid": "9058505c04c1dc7c33603fd8347312a0", "text": "Fear appeals are a polarizing issue, with proponents confident in their efficacy and opponents confident that they backfire. We present the results of a comprehensive meta-analysis investigating fear appeals' effectiveness for influencing attitudes, intentions, and behaviors. We tested predictions from a large number of theories, the majority of which have never been tested meta-analytically until now. Studies were included if they contained a treatment group exposed to a fear appeal, a valid comparison group, a manipulation of depicted fear, a measure of attitudes, intentions, or behaviors concerning the targeted risk or recommended solution, and adequate statistics to calculate effect sizes. The meta-analysis included 127 articles (9% unpublished) yielding 248 independent samples (NTotal = 27,372) collected from diverse populations. Results showed a positive effect of fear appeals on attitudes, intentions, and behaviors, with the average effect on a composite index being random-effects d = 0.29. Moderation analyses based on prominent fear appeal theories showed that the effectiveness of fear appeals increased when the message included efficacy statements, depicted high susceptibility and severity, recommended one-time only (vs. repeated) behaviors, and targeted audiences that included a larger percentage of female message recipients. Overall, we conclude that (a) fear appeals are effective at positively influencing attitude, intentions, and behaviors; (b) there are very few circumstances under which they are not effective; and (c) there are no identified circumstances under which they backfire and lead to undesirable outcomes.", "title": "" }, { "docid": "c0d3c14e792a02a9ad57745b31b84be6", "text": "INTRODUCTION\nCritically ill patients are characterized by increased loss of muscle mass, partially attributed to sepsis and multiple organ failure, as well as immobilization. Recent studies have shown that electrical muscle stimulation (EMS) may be an alternative to active exercise in chronic obstructive pulmonary disease (COPD) and chronic heart failure (CHF) patients with myopathy. The aim of our study was to investigate the EMS effects on muscle mass preservation of critically ill patients with the use of ultrasonography (US).\n\n\nMETHODS\nForty-nine critically ill patients (age: 59 +/- 21 years) with an APACHE II admission score >or=13 were randomly assigned after stratification upon admission to receive daily EMS sessions of both lower extremities (EMS-group) or to the control group (control group). Muscle mass was evaluated with US, by measuring the cross sectional diameter (CSD) of the vastus intermedius and the rectus femoris of the quadriceps muscle.\n\n\nRESULTS\nTwenty-six patients were finally evaluated. Right rectus femoris and right vastus intermedius CSD decreased in both groups (EMS group: from 1.42 +/- 0.48 to 1.31 +/- 0.45 cm, P = 0.001 control group: from 1.59 +/- 0.53 to 1.37 +/- 0.5 cm, P = 0.002; EMS group: from 0.91 +/- 0.39 to 0.81 +/- 0.38 cm, P = 0.001 control group: from 1.40 +/- 0.64 to 1.11 +/- 0.56 cm, P = 0.004, respectively). 
However, the CSD of the right rectus femoris decreased significantly less in the EMS group (-0.11 +/- 0.06 cm, -8 +/- 3.9%) as compared to the control group (-0.21 +/- 0.10 cm, -13.9 +/- 6.4%; P < 0.05) and the CSD of the right vastus intermedius decreased significantly less in the EMS group (-0.10 +/- 0.05 cm, -12.5 +/- 7.4%) as compared to the control group (-0.29 +/- 0.28 cm, -21.5 +/- 15.3%; P < 0.05).\n\n\nCONCLUSIONS\nEMS is well tolerated and seems to preserve the muscle mass of critically ill patients. The potential use of EMS as a preventive and rehabilitation tool in ICU patients with polyneuromyopathy needs to be further investigated.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov: NCT00882830.", "title": "" }, { "docid": "756929d22f107a5ff0b3bf0b19414a06", "text": "Users of social networking sites such as Facebook frequently post self-portraits on their profiles. While research has begun to analyze the motivations for posting such pictures, less is known about how selfies are evaluated by recipients. Although producers of selfies typically aim to create a positive impression, selfies may also be regarded as narcissistic and therefore fail to achieve the intended goal. The aim of this study is to examine the potentially ambivalent reception of selfies compared to photos taken by others based on the Brunswik lens model Brunswik (1956). In a between-subjects online experiment (N = 297), Facebook profile mockups were shown which differed with regard to picture type (selfie vs. photo taken by others), gender of the profile owner (female vs. male), and number of individuals within a picture (single person vs. group). Results revealed that selfies were indeed evaluated more negatively than photos taken by others. Persons in selfies were rated as less trustworthy, less socially attractive, less open to new experiences, more narcissistic and more extroverted than the same persons in photos taken by others. In addition, gender differences were observed in the perception of pictures. Male profile owners were rated as more narcissistic and less trustworthy than female profile owners, but there was no significant interaction effect of type of picture and gender. Moreover, a mediation analysis of presumed motives for posting selfies revealed that negative evaluations of selfie posting individuals were mainly driven by the perceived motivation of impression management. Findings suggest that selfies are likely to be evaluated less positively than producers of selfies might suppose.", "title": "" }, { "docid": "a6cf168632efb2a4c4a4d91c4161dc24", "text": "This paper presents a systematic approach to transform various fault models to a unified model such that all faults of interest can be handled in one ATPG run. The fault models that can be transformed include, but are not limited to, stuck-at faults, various types of bridging faults, and cell-internal faults. The unified model is the aggressor-victim type of bridging fault model. Two transformation methods, namely fault-based and pattern-based transformations, are developed for cell-external and cell-internal faults, respectively. With the proposed approach, one can use an ATPG tool for bridging faults to deal with the test generation problems of multiple fault models simultaneously. Hence the total test generation time can be reduced and highly compact test sets can be obtained. 
Experimental results show that on average 54.94% (16.45%) and 47.22% (17.51%) test pattern volume reductions are achieved compared to the method that deals with the three fault models separately without (with) fault dropping for ISCAS'89 and IWLS'05 circuits, respectively.", "title": "" }, { "docid": "6e46fd2a8370bc42d245ca128c9f537b", "text": "A literature review of the associations between involvement in bullying and depression is presented. Many studies have demonstrated a concurrent association between involvement in bullying and depression in adolescent population samples. Not only victims but also bullies display increased risk of depression, although not all studies have confirmed this for the bullies. Retrospective studies among adults support the notion that victimization is followed by depression. Prospective follow-up studies have suggested both that victimization from bullying may be a risk factor for depression and that depression may predispose adolescents to bullying. Research among clinically referred adolescents is scarce but suggests that correlations between victimization from bullying and depression are likely to be similar in clinical and population samples. Adolescents who bully present with elevated numbers of psychiatric symptoms and psychiatric and social welfare treatment contacts.", "title": "" } ]
scidocsrr
77b2421fa95cc8ca18ff6af2a5c75f70
Building Expression into Virtual Characters
[ { "docid": "ab1e4a8b0a4d00af488923ea52053aee", "text": "This paper describes Steve, an animated agent that helps students learn to perform physical, procedural tasks. The student and Steve cohabit a three-dimensional, simulated mock-up of the student's work environment. Steve can demonstrate how to perform tasks and can also monitor students while they practice tasks, providing assistance when needed. This paper describes Steve's architecture in detail, including perception, cognition, and motor control. The perception module monitors the state of the virtual world, maintains a coherent representation of it, and provides this information to the cognition and motor control modules. The cognition module interprets its perceptual input, chooses appropriate goals, constructs and executes plans to achieve those goals, and sends out motor commands. The motor control module implements these motor commands, controlling Steve's voice, locomotion, gaze, and gestures, and allowing Steve to manipulate objects in the virtual world.", "title": "" } ]
[ { "docid": "870159d500da7a415bac4ce6184c9556", "text": "We propose a versatile framework in which one can employ different machine learning algorithms to successfully distinguish between malware files and clean files, while aiming to minimise the number of false positives. In this paper we present the ideas behind our framework by working firstly with cascade one-sided perceptrons and secondly with cascade kernelized one-sided perceptrons. After having been successfully tested on medium-size datasets of malware and clean files, the ideas behind this framework were submitted to a scaling-up process that enable us to work with very large datasets of malware and clean files.", "title": "" }, { "docid": "b6e98db31f9090ac454ca0ebc93f0329", "text": "The ability of interval arithmetic to provide a finite (and succinct) way to represent uncertainty about a large, possibly uncountable, set of alternatives turns out to be useful in building “intelligent” autonomous agents. In particular, consider the two important issues of reasoning and sensing in intelligent control for autonomous agents. Developing a principled way to combine the two raises complicated issues in knowledge representation. In this paper we describe a solution to the problem. The idea is to incorporate interval arithmetic into the situation calculus. The situation calculus is a well known formalism for describing changing worlds using sorted first-order logic. It can also be used to describe how an agent’s knowledge of its world changes. Potentially, this provides a sound basis for incorporating sensing into logic programming. Previous work has relied on a possible worlds approach to knowledge. This leads to an elegant mathematical specification language. Unfortunately, there have been no proposals on how to implement the approach. This is because the number of possible worlds is potentially uncountable. We propose an alternative formalization of knowledge within the situation calculus. Our approach is based on intervals. The advantage is that it is straightforward to implement. Moreover, we can prove that it is sound and (sometimes) complete with respect to the previous possible worlds approach.", "title": "" }, { "docid": "786059412338001452e3d51da23347b4", "text": "Orthomode transducers (OMTs) using folded six-port waveguide junctions are presented for obtaining the maximum compactness in antenna feeds with very wide performance. The proposed junctions are based on the well-known turnstile and Bøifot configurations, having two symmetry planes for keeping the isolation between orthogonal polarizations and the control of higher order modes. The folded arms provide a combined effect: good matching with very significant size reduction, especially in the transversal plane, reducing mass in satellite systems and allowing feeding eventual dense horn antenna arrays. Two Ku-band OMTs are presented in order to illustrate the advantages of the introduced junctions. The first design covers the band 10.4–18.8 GHz (57.5%) with 24 dB return loss. 
The second OMT, for the 12.60–18.25 GHz band, has measured return loss better than 29 dB in the design band and insertion loss smaller than 0.11 dB for both polarizations.", "title": "" }, { "docid": "88d1062b03e96c8c50c6ee8923cb32da", "text": "On the one hand this paper presents a theoretical method to predict the responses for the parallel coupled microstrip bandpass filters, and on the other hand proposes a new MATLAB simulation interface including all parameters design procedure to predict the filter responses. The main advantage of this developed interface calculator is to enable researchers and engineers to design and determine easily all parameters of the PCMBPF responses with high accuracy and very small CPU time. To validate the numerical method and the corresponding new interface calculator, two PCMBP filters for wireless communications are designed and compared with the commercial electromagnetic CST simulator and the fabricated prototype respectively. Measured results show good agreement with those obtained by numerical method and simulations.", "title": "" }, { "docid": "a9621ae83268a372b2220030c4022a9e", "text": "A 15-50-GHz two-port quasi-optical scalar network analyzer consisting of a transmitter and receiver built in a planar technology is presented. The network analyzer is based on a Schottky-diode multiplier and mixer integrated inside a planar antenna and fed differentially by a coplanar waveguide transmission line. The antenna is placed on an extended hemispherical high-resistivity silicon substrate lens. The local oscillator signal is swept from 3 to 5 GHz and high-order harmonic mixing in both the up- and down-conversion mode is used to realize the RF bandwidth. The network analyzer has a dynamic range of >;50 dB in a 1-kHz bandwidth, and was successfully used to measure frequency-selective surfaces with f0=20, 30, and 40 GHz and a second-order bandpass response. Furthermore, the system was built with circuits and components for easy scaling to millimeter-wave frequencies, which is the primary motivation for this work.", "title": "" }, { "docid": "357ae5590fb6f11fbd210baced2fc4ee", "text": "To achieve the best results from an OCR system, the pre-processing steps must be performed with a high degree of accuracy and reliability. There are two critically important steps in the OCR pre-processing phase. First, blocks must be extracted from each page of the scanned document. Secondly, all blocks resulting from the first step must be arranged in the correct order. One of the most notable techniques for block ordering in the second step is the recursive x-y cut (RXYC) algorithm. This technique works accurately only when applied to documents with a simple page layout but it causes incorrect block ordering when applied to documents with complex page layouts. This paper proposes a modified recursive x-y cut algorithm for solving block ordering problems for documents with complex page layouts. This proposed algorithm can solve problems such as (1) the overlapping block problem; (2) the blocks overlay problem, and (3) the L-Shaped block problem.", "title": "" }, { "docid": "309a5105be37cbbae67619eac6874f12", "text": "PURPOSE\nTo conduct a systematic review of prospective studies assessing the association of vitamin D intake or blood levels of 25-hydroxyvitamin D [25(OH)D] with the risk of colorectal cancer using meta-analysis.\n\n\nMETHODS\nRelevant studies were identified by a search of MEDLINE and EMBASE databases before October 2010 with no restrictions. 
We included prospective studies that reported relative risk (RR) estimates with 95% CIs for the association between vitamin D intake or blood 25(OH)D levels and the risk of colorectal, colon, or rectal cancer. Approximately 1,000,000 participants from several countries were included in this analysis.\n\n\nRESULTS\nNine studies on vitamin D intake and nine studies on blood 25(OH)D levels were included in the meta-analysis. The pooled RRs of colorectal cancer for the highest versus lowest categories of vitamin D intake and blood 25(OH)D levels were 0.88 (95% CI, 0.80 to 0.96) and 0.67 (95% CI, 0.54 to 0.80), respectively. There was no heterogeneity among studies of vitamin D intake (P = .19) or among studies of blood 25(OH)D levels (P = .96). A 10 ng/mL increment in blood 25(OH)D level conferred an RR of 0.74 (95% CI, 0.63 to 0.89).\n\n\nCONCLUSION\nVitamin D intake and blood 25(OH)D levels were inversely associated with the risk of colorectal cancer in this meta-analysis.", "title": "" }, { "docid": "0dffca7979e72f7bb4b0fd94b031a46f", "text": "In collaborative filtering approaches, recommendations are inferred from user data. A large volume and a high data quality is essential for an accurate and precise recommender system. As consequence, companies are collecting large amounts of personal user data. Such data is often highly sensitive and ignoring users’ privacy concerns is no option. Companies address these concerns with several risk reduction strategies, but none of them is able to guarantee cryptographic secureness. To close that gap, the present paper proposes a novel recommender system using the advantages of blockchain-supported secure multiparty computation. A potential customer is able to allow a company to apply a recommendation algorithm without disclosing her personal data. Expected benefits are a reduction of fraud and misuse and a higher willingness to share personal data. An outlined experiment will compare users’ privacy-related behavior in the proposed recommender system with existent solutions.", "title": "" }, { "docid": "cde6d84d22ca9d8cd851f3067bc9b41e", "text": "The purpose of the present study was to examine the reciprocal relationships between authenticity and measures of life satisfaction and distress using a 2-wave panel study design. Data were collected from 232 college students attending 2 public universities. Structural equation modeling was used to analyze the data. The results of the cross-lagged panel analysis indicated that after controlling for temporal stability, initial authenticity (Time 1) predicted later distress and life satisfaction (Time 2). Specifically, higher levels of authenticity at Time 1 were associated with increased life satisfaction and decreased distress at Time 2. Neither distress nor life satisfaction at Time 1 significantly predicted authenticity at Time 2. However, the relationship between Time 1 distress and Time 2 authenticity was not significantly different from the relationship between Time 1 authenticity and Time 2 distress. Results are discussed in light of humanistic-existential theories and the empirical research on well-being.", "title": "" }, { "docid": "60e535675964e23fc3bac15aef49880e", "text": "In this paper, we propose <italic>DeepCut</italic>, a method to obtain pixelwise object segmentations given an image dataset labelled weak annotations, in our case bounding boxes. 
It extends the approach of the well-known <italic>GrabCut</italic> <xref ref-type=\"bibr\" rid=\"ref1\">[1]</xref> method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the <italic>DeepCut</italic> method and compare those to a naïve approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.", "title": "" }, { "docid": "90fe763855ca6c4fabe4f9d042d5c61a", "text": "While learning models of intuitive physics is an increasingly active area of research, current approaches still fall short of natural intelligences in one important regard: they require external supervision, such as explicit access to physical states, at training and sometimes even at test times. Some authors have relaxed such requirements by supplementing the model with an handcrafted physical simulator. Still, the resulting methods are unable to automatically learn new complex environments and to understand physical interactions within them. In this work, we demonstrated for the first time learning such predictors directly from raw visual observations and without relying on simulators. We do so in two steps: first, we learn to track mechanically-salient objects in videos using causality and equivariance, two unsupervised learning principles that do not require auto-encoding. Second, we demonstrate that the extracted positions are sufficient to successfully train visual motion predictors that can take the underlying environment into account. We validate our predictors on synthetic datasets; then, we introduce a new dataset, ROLL4REAL, consisting of real objects rolling on complex terrains (pool table, elliptical bowl, and random height-field). We show that in all such cases it is possible to learn reliable extrapolators of the object trajectories from raw videos alone, without any form of external supervision and with no more prior knowledge than the choice of a convolutional neural network architecture.", "title": "" }, { "docid": "df762ce22bd9135705ad2ecf57859260", "text": "We describe a model for creating word-to-word and phrase-to-phrase alignments between documents and their human written abstracts. Such alignments are critical for the development of statistical summarization systems that can be trained on large corpora of document/abstract pairs. Our model, which is based on a novel Phrase-Based HMM, outperforms both the Cut & Paste alignment model (Jing, 2002) and models developed in the context of machine translation (Brown et al., 1993).", "title": "" }, { "docid": "140e8946b1d44a09cb320d2db192b584", "text": "Die attach by low-temperature sintering of nanoparticles of silver is an emerging lead-free joining solution for electronic packaging because of the high thermal/electrical conductivity and high reliability of silver. For bonding small chips, the attachment can be achieved by a simple heating profile under atmospheric pressure. However, for bonding chips with an area , an external pressure of a few MPa is reported necessary at the sintering temperature of ~ 250 °C. 
This hot-pressing process in excess of 200 °C can add significant complexity and costs to manufacturing and maintenance. In this paper, we conduct a fractional factorial design of experiments aimed at lowering the temperature at which pressure is required for the die-attach process. In particular, we examine the feasibility of applying pressure only during the drying stage of the process when the temperature is still at 180 °C. The experiments help to identify the importance and interaction of various processing parameters, such as pressure, temperature, and time, on the bonding strength and microstructure of sintered nanosilver joints. In addition, the positive effect of pressure applied during drying on the bonding quality is observed. With the results, a simpler process, consisting of pressure drying at 180 °C under 3 MPa pressure, followed by sintering at 275 °C under atmospheric pressure, is found to produce attachments with die-shear strengths in excess of 30 MPa.", "title": "" }, { "docid": "641bc7bfd28f3df41dd0eaef0543832a", "text": "Monitoring parameters characterizing water quality, such as temperature, pH, and concentrations of heavy metals in natural waters, is often followed by transmitting the data to remote receivers using telemetry systems. Such systems are commonly powered by batteries, which can be inconvenient at times because batteries have a limited lifetime and must be recharged or replaced periodically to ensure that sufficient energy is available to power the electronics. To avoid these inconveniences, a microbial fuel cell was designed to power electrochemical sensors and small telemetry systems to transmit the data acquired by the sensors to remote receivers. The microbial fuel cell was combined with low-power, high-efficiency electronic circuitry providing a stable power source for wireless data transmission. To generate enough power for the telemetry system, energy produced by the microbial fuel cell was stored in a capacitor and used in short bursts when needed. Since commercial electronic circuits require a minimum 3.3 V input and our cell was able to deliver a maximum of 2.1 V, a DC-DC converter was used to boost the potential. The DC-DC converter powered a transmitter, which gathered the data from the sensor and transmitted it wirelessly to a remote receiver. To demonstrate the utility of the system, temporal variations in temperature were measured, and the data were wirelessly transmitted to a remote receiver.", "title": "" }, { "docid": "0bb53802df49097659ec2e9962ef4ede", "text": "In her 2006 book \"My Stroke of Insight\" Dr. Jill Bolte Taylor relates her experience of suffering from a left hemispheric stroke caused by a congenital arteriovenous malformation which led to a loss of inner speech. Her phenomenological account strongly suggests that this impairment produced a global self-awareness deficit as well as more specific dysfunctions related to corporeal awareness, sense of individuality, retrieval of autobiographical memories, and self-conscious emotions. These are examined in details and corroborated by numerous excerpts from Taylor's book.", "title": "" }, { "docid": "8172de8b6169b8cca687573991dde6e7", "text": "The number of home computer users is increasing faster than ever. 
Home users’ security should be an important research topic in IS security research, not only from the perspective of protecting home users’ personal or work information on their home computers, but also because hijacked home computers have become an ideal breading ground for hackers attacking organizations, and distributing illegal or morally questionable material. Despite the importance of studying home users’ security behaviour, the primary focus of the behavioural IS security research has been on an organizational context. While this research at an organizational context is important, we argue that the “home users” context require more attention by scholars. While there are similarities between “home users’ IS security behaviour” and “employees’ compliance with IS security procedures at organizational context”, it is necessary to understand their differences, to allow research and practice on “home users security behaviour” to develop further. We argue that previous research has not paid attention to such differences. As a first step in remedying the gap in our understanding, we first theorise these differences, we consider, that there are at least nine contextual factors that may result in an individual’s behaviour inconsistency in the workplace and home, and because of this, we argue that the same theories may not explain the use of security features in home and organizational contexts. Based on this conceptualization, we present a research agenda for studying home users’ security behaviour.", "title": "" }, { "docid": "2f46d27cc3f7fa696d18268950758d5c", "text": "Software language identification techniques are applicable to many situations from universal IDE support to legacy code analysis. Most widely used heuristics are based on software artefact metadata such as file extensions or on grammar-based text analysis such as keyword search. In this paper we propose to use statistical language models from the natural language processing field such as n-grams, skip-grams, multinominal naïve Bayes and normalised compression distance. Our preliminary experiments show that some of these models used as classifiers can achieve high precision and recall and can be used to properly identify language families, languages and even deal with embedded code fragments.", "title": "" }, { "docid": "0604c1ed7ea5a57387d013a5f94f8c00", "text": "Many current Internet services rely on inferences from models trained on user data. Commonly, both the training and inference tasks are carried out using cloud resources fed by personal data collected at scale from users. Holding and using such large collections of personal data in the cloud creates privacy risks to the data subjects, but is currently required for users to benefit from such services. We explore how to provide for model training and inference in a system where computation is pushed to the data in preference to moving data to the cloud, obviating many current privacy risks. Specifically, we take an initial model learnt from a small set of users and retrain it locally using data from a single user. We evaluate on two tasks: one supervised learning task, using a neural network to recognise users' current activity from accelerometer traces; and one unsupervised learning task, identifying topics in a large set of documents. In both cases the accuracy is improved. 
We also analyse the robustness of our approach against adversarial attacks, as well as its feasibility by presenting a performance evaluation on a representative resource-constrained device (a Raspberry Pi).", "title": "" }, { "docid": "d03adda25ea5415c241310f12bf50470", "text": "The classical approach to depth from defocus (DFD) uses lenses with circular apertures for image capturing. We show in this paper that the use of a circular aperture severely restricts the accuracy of DFD. We derive a criterion for evaluating a pair of apertures with respect to the precision of depth recovery. This criterion is optimized using a genetic algorithm and gradient descent search to arrive at a pair of high resolution apertures. These two coded apertures are found to complement each other in the scene frequencies they preserve. This property enables them to not only recover depth with greater fidelity but also obtain a high quality all-focused image from the two captured images. Extensive simulations as well as experiments on a variety of real scenes demonstrate the benefits of using the coded apertures over conventional circular apertures.", "title": "" } ]
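The passage above describes taking a model pre-trained centrally on a small set of users and retraining it locally on a single user's device. The fragment below is a minimal sketch of that pattern, assuming a scikit-learn classifier with an incremental `partial_fit` interface; the feature dimension, labels, and synthetic data are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Stand-in for the initial model trained centrally on a small set of users.
X_global = rng.normal(size=(500, 16))
y_global = (X_global[:, 0] > 0).astype(int)
model = SGDClassifier(random_state=0)
model.partial_fit(X_global, y_global, classes=np.array([0, 1]))

# Local personalisation: only this user's data touches the on-device copy of
# the model; nothing is sent back to the cloud.
X_local = rng.normal(loc=0.3, size=(60, 16))
y_local = (X_local[:, 0] > 0.3).astype(int)
model.partial_fit(X_local, y_local)

print("accuracy on the user's own data:", model.score(X_local, y_local))
```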
scidocsrr
f1f30512a0efcda2d89c4d8e91549adf
Automatic pothole and speed breaker detection using android system
[ { "docid": "d4fc45837d85f3a03fa4bd76b45921a1", "text": "The importance of the road infrastructure for the society could be compared with importance of blood vessels for humans. To ensure road surface quality it should be monitored continuously and repaired as necessary. The optimal distribution of resources for road repairs is possible providing the availability of comprehensive and objective real time data about the state of the roads. Participatory sensing is a promising approach for such data collection. The paper is describing a mobile sensing system for road irregularity detection using Android OS based smart-phones. Selected data processing algorithms are discussed and their evaluation presented with true positive rate as high as 90% using real world data. The optimal parameters for the algorithms are determined as well as recommendations for their application.", "title": "" }, { "docid": "cf79cd1f110e2539697390e37e48b8d8", "text": "This paper investigates an application of mobile sensing: detecting and reporting the surface conditions of roads. We describe a system and associated algorithms to monitor this important civil infrastructure using a collection of sensor-equipped vehicles. This system, which we call the Pothole Patrol (P2), uses the inherent mobility of the participating vehicles, opportunistically gathering data from vibration and GPS sensors, and processing the data to assess road surface conditions. We have deployed P2 on 7 taxis running in the Boston area. Using a simple machine-learning approach, we show that we are able to identify potholes and other severe road surface anomalies from accelerometer data. Via careful selection of training data and signal features, we have been able to build a detector that misidentifies good road segments as having potholes less than 0.2% of the time. We evaluate our system on data from thousands of kilometers of taxi drives, and show that it can successfully detect a number of real potholes in and around the Boston area. After clustering to further reduce spurious detections, manual inspection of reported potholes shows that over 90% contain road anomalies in need of repair.", "title": "" } ]
[ { "docid": "af8717564020344dfc267eefbf818032", "text": "Increasingly, businesses are beginning to understand the profit potential of loyal customers (Oliver, 1999). Marketers endowed with such consumers can expect repeat patronage to remain high until competitors can find a way to: (1) close the gap in attitude among brands, (2) increase the differentiation of their own brand, or (3) encourage spurious loyalty from consumers (Dick & Basu, 1994). Loyalty leads to higher retention. According to one study, a 5% increase in customer retention rates increases profits by 25% to 95% (Reicheld & Schefter, 2000). It is thus heartening to note that “one of the most exciting and successful uses of [the Internet] ... may be the Internet’s role in building customer loyalty and maximizing sales to your existing customers” (Griffin, 1996, p. 50). Given its relative importance in cyberspace, it is surprising that relatively little has been done in conceptualizing and validating e-loyalty models (Luarn & Lin, 2003). Parasuraman and Grewal (2000) argue for more research pertaining to the influence of technology on customer responses, such as perceived value and customer loyalty. Besides customer trust, our study also incorporates two constructs—corporate image and perceived value—that have been poorly explored in online environments despite their recognized importance in off-line contexts. Consequently, a primary objective of this article is to discuss the impact of three constructs (i.e., customer trust, corporate image, and perceived value) on e-loyalty in a business-to-consumer (B2C) e-commerce context. In doing so, our model is expected to offer useful suggestions on how to manage customer trust, corporate image, and perceived value as online loyalty management tools. This article is generally divided into three sections. The first section will discuss the constructs of interest and clarify what they mean. In the second section, we will propose hypotheses explaining these relationships. And in the final section, we introduce actionable strategies for online loyalty management based on the proposed framework. BACKGROUND", "title": "" }, { "docid": "7acaddbb33af45e38631f5f3bb0efd92", "text": "Purpose – The purpose of this paper is to verify the relationship between switching costs and customer loyalty in e-commerce. Design/methodology/approach – The study conducted an empirical research. A total of 425 online shopping customers were invited from northeastern USA as samples. Findings – The findings show that switching costs positively influence customer loyalty. In addition, perceived risks will affect the relationship of switching costs and customer loyalty. For customers with low perceived risks, switching costs are also positively associated with customer loyalty. However, for customers with high perceived risks, the relationship of switching costs and customer loyalty is weak or negative. Research limitations/implications – One limitation is that mostly students were selected in the sample. The insights of this study can further validate the previous studies about the relationship of switching costs and customer loyalty, and suggest that perceived risks can be a moderating factor affecting this relationship. Practical implications – The study suggests that the practitioners should further understand the relationship among switching costs, perceived risks and customer loyalty for their customers. 
Originality/value – The paper contributes to the knowledge of perceived risks and how switching costs affect customer loyalty, particularly in e-commerce.", "title": "" }, { "docid": "223252b8bf99671eedd622c99bc99aaf", "text": "We present a novel dataset for natural language generation (NLG) in spoken dialogue systems which includes preceding context (user utterance) along with each system response to be generated, i.e., each pair of source meaning representation and target natural language paraphrase. We expect this to allow an NLG system to adapt (entrain) to the user’s way of speaking, thus creating more natural and potentially more successful responses. The dataset has been collected using crowdsourcing, with several stages to obtain natural user utterances and corresponding relevant, natural, and contextually bound system responses. The dataset is available for download under the Creative Commons 4.0 BY-SA license.", "title": "" }, { "docid": "1596859e5e6c8abf7d2ec578df1dd1a6", "text": "In this paper we introduce our new improved version of ant colony optimization/genetic algorithm hybrid for Sudoku puzzle solving. Sudoku is combinatorial number puzzle that had become worldwide phenomenon in the last decade. It has also become popular mathematical test problem in order to test new optimization ideas and algorithms for combinatorial problems. In this paper we present our new ideas for populations sorting and elitism rules in order to improve our earlier evolutionary algorithm based Sudoku solvers. Experimental results show that the new ideas significantly improved the speed of Sudoku solving.", "title": "" }, { "docid": "3b02d0f38e07d76dbf0016760a54cee8", "text": "BACKGROUND\nThe two step floating catchment area (2SFCA) method has emerged in the last decade as a key measure of spatial accessibility, particularly in its application to primary health care access. Many recent 'improvements' to the original 2SFCA method have been developed, which generally either account for distance-decay within a catchment or enable the usage of variable catchment sizes. This paper evaluates the effectiveness of various proposed methods within these two improvement groups. Moreover, its assessment focuses on how well these improvements operate within and between rural and metropolitan populations over large geographical regions.\n\n\nRESULTS\nDemonstrating these improvements to the whole state of Victoria, Australia, this paper presents the first comparison between continuous and zonal (step) decay functions and specifically their effect within both rural and metropolitan populations. Especially in metropolitan populations, the application of either type of distance-decay function is shown to be problematic by itself. Its inclusion necessitates the addition of a variable catchment size function which can enable the 2SFCA method to dynamically define more appropriate catchments which align with actual health service supply and utilisation.\n\n\nCONCLUSION\nThis study assesses recent 'improvements' to the 2SFCA when applied over large geographic regions of both large and small populations. Its findings demonstrate the necessary combination of both a distance-decay function and variable catchment size function in order for the 2SFCA to appropriately measure healthcare access across all geographical regions.", "title": "" }, { "docid": "91cb8726930e39db53814ceab69b7a50", "text": "Traditional methods for processing large images are extremely time intensive. 
Also, conventional image processing methods do not take advantage of available computing resources such as multicore central processing unit (CPU) and manycore general purpose graphics processing unit (GP-GPU). Studies suggest that applying parallel programming techniques to various image filters should improve the overall performance without compromising the existing resources. Recent studies also suggest that parallel implementation of image processing on compute unified device architecture (CUDA)-accelerated CPU/GPU system has potential to process the image very fast. In this paper, we introduce a CUDA-accelerated image processing method suitable for multicore/manycore systems. Using a bitmap file, we implement image processing and filtering through traditional sequential C and newly introduced parallel CUDA/C programs. A key step of the proposed algorithm is to load the pixel's bytes in a one dimensional array with length equal to matrix width * matrix height * bytes per pixel. This is done to process the image concurrently in parallel. According to experimental results, the proposed CUDA-accelerated parallel image processing algorithm provides benefit with a speedup factor up to 365 for an image with 8,192×8,192 pixels.", "title": "" }, { "docid": "13a1fc2a026a899379ca4f11ac6fdaf8", "text": "Recognizing objects from the point cloud captured by modern 3D sensors is an important task for robots operating autonomously in real-world environments. However, the existing well-performing approaches typically suffer from a trade-off between resolution of representation and computational efficiency. In this paper, raw point cloud normals are fed into the Point Convolution Network (PCN) without any other representation converts. The point cloud set disordered and unstructured problems are tackled by Kd-tree-based local permutation and spatial commutative pooling strategies proposed in this paper. Experiments on ModelNet illustrate that our method has two orders of magnitude less floating point computation in each non-linear mapping layer while it contributes to significant classification accuracy improvement. Compared to some of the state-of-the-art methods using the 3D volumetric image convolution, the PCN method also yields comparable classification accuracy.", "title": "" }, { "docid": "e324d34ba582466ddf21457e28981644", "text": "Writing was invented too recently to have influenced the human genome. Consequently, reading acquisition must rely on partial recycling of pre-existing brain systems. Prior fMRI evidence showed that in literates a left-hemispheric visual region increases its activation to written strings relative to illiterates and reduces its response to faces. Increasing literacy also leads to a stronger right-hemispheric lateralization for faces. Here, we evaluated whether this reorganization of the brain's face system has behavioral consequences for the processing of non-linguistic visual stimuli. Three groups of adult illiterates, ex-illiterates and literates were tested with the sequential composite face paradigm that evaluates the automaticity with which faces are processed as wholes. Illiterates were consistently more holistic than participants with reading experience in dealing with faces. A second experiment replicated this effect with both faces and houses. 
Brain reorganization induced by literacy seems to reduce the influence of automatic holistic processing of faces and houses by enabling the use of a more analytic and flexible processing strategy, at least when holistic processing is detrimental to the task.", "title": "" }, { "docid": "2d11f3c85fe7dee5c8af6ab7ab6caf28", "text": "As more and more cyber security incident data ranging from systems logs to vulnerability scan results are collected, manually analyzing these collected data to detect important cyber security events become impossible. Hence, data mining techniques are becoming an essential tool for real-world cyber security applications. For example, a report from Gartner [gartner12] claims that \"Information security is becoming a big data analytics problem, where massive amounts of data will be correlated, analyzed and mined for meaningful patterns\". Of course, data mining/analytics is a means to an end where the ultimate goal is to provide cyber security analysts with prioritized actionable insights derived from big data. This raises the question, can we directly apply existing techniques to cyber security applications? One of the most important differences between data mining for cyber security and many other data mining applications is the existence of malicious adversaries that continuously adapt their behavior to hide their actions and to make the data mining models ineffective. Unfortunately, traditional data mining techniques are insufficient to handle such adversarial problems directly. The adversaries adapt to the data miner's reactions, and data mining algorithms constructed based on a training dataset degrades quickly. To address these concerns, over the last couple of years new and novel data mining techniques which is more resilient to such adversarial behavior are being developed in machine learning and data mining community. We believe that lessons learned as a part of this research direction would be beneficial for cyber security researchers who are increasingly applying machine learning and data mining techniques in practice.\n To give an overview of recent developments in adversarial data mining, in this three hour long tutorial, we introduce the foundations, the techniques, and the applications of adversarial data mining to cyber security applications. We first introduce various approaches proposed in the past to defend against active adversaries, such as a minimax approach to minimize the worst case error through a zero-sum game. We then discuss a game theoretic framework to model the sequential actions of the adversary and the data miner, while both parties try to maximize their utilities. We also introduce a modified support vector machine method and a relevance vector machine method to defend against active adversaries. Intrusion detection and malware detection are two important application areas for adversarial data mining models that will be discussed in details during the tutorial. Finally, we discuss some practical guidelines on how to use adversarial data mining ideas in generic cyber security applications and how to leverage existing big data management tools for building data mining algorithms for cyber security.", "title": "" }, { "docid": "9e04e2d09e0b57a6af76ed522ede1154", "text": "The field of surveillance and forensics research is currently shifting focus and is now showing an ever increasing interest in the task of people reidentification. 
This is the task of assigning the same identifier to all instances of a particular individual captured in a series of images or videos, even after the occurrence of significant gaps over time or space. People reidentification can be a useful tool for people analysis in security as a data association method for long-term tracking in surveillance. However, current identification techniques being utilized present many difficulties and shortcomings. For instance, they rely solely on the exploitation of visual cues such as color, texture, and the object’s shape. Despite the many advances in this field, reidentification is still an open problem. This survey aims to tackle all the issues and challenging aspects of people reidentification while simultaneously describing the previously proposed solutions for the encountered problems. This begins with the first attempts of holistic descriptors and progresses to the more recently adopted 2D and 3D model-based approaches. The survey also includes an exhaustive treatise of all the aspects of people reidentification, including available datasets, evaluation metrics, and benchmarking.", "title": "" }, { "docid": "7f40c1a28ace6ed1421b1dde4112d08b", "text": "Find loads of the image mosaicing and super resolution book catalogues in this site as the choice of you visiting this page. You can also join to the website book library that will show you numerous books from any types. Literature, science, politics, and many more catalogues are presented to offer you the best book to find. The book that really makes you feels satisfied. Or that's the book that will save you from your job deadline.", "title": "" }, { "docid": "5d79bbe7176d0a28cdcce4f34237dad3", "text": "Various case studies in different application domains have shown the great potential of visual parameter space analysis to support validating and using simulation models. In order to guide and systematize research endeavors in this area, we provide a conceptual framework for visual parameter space analysis problems. The framework is based on our own experience and a structured analysis of the visualization literature. It contains three major components: (1) a data flow model that helps to abstractly describe visual parameter space analysis problems independent of their application domain; (2) a set of four navigation strategies of how parameter space analysis can be supported by visualization tools; and (3) a characterization of six analysis tasks. Based on our framework, we analyze and classify the current body of literature, and identify three open research gaps in visual parameter space analysis. The framework and its discussion are meant to support visualization designers and researchers in characterizing parameter space analysis problems and to guide their design and evaluation processes.", "title": "" }, { "docid": "e4cba1a4ebef9fa18c3ee11258160a8b", "text": "Subocclusive hymenal variants, such as microperforate or septate hymen, impair somatic functions (e.g., vaginal intercourse or menstrual hygiene) and can negatively impact the quality of life of young women. We know little about the prevalence and inheritance of subocclusive hymenal variants. So far, eight cases of familial occurrence of occlusive hymenal anomalies (imperforate hymen) have been reported. In one of these cases, monozygotic twins were affected. We are reporting the first case of subocclusive hymenal variants (microperforate hymen and septate hymen) in 16-year-old white dizygotic twins. 
In addition, we review and discuss the current evidence. Conclusion: The mode of inheritance of hymenal variants has not been determined so far. Because surgical corrections of hymenal variants should be carried out in asymptomatic patients (before menarche), gynecologists and pediatricians should keep in mind that familial occurrences may occur.", "title": "" }, { "docid": "ec6352845fdd14d6d4844604a01a5613", "text": "Single loop detectors provide the most abundant source of traffic data in California, but loop data samples are often missing or invalid. We describe a method that detects bad data samples and imputes missing or bad samples to form a complete grid of ‘clean data’, in real time. The diagnostics algorithm and the imputation algorithm that implement this method are operational on 14,871 loops in six Districts of the California Department of Transportation. The diagnostics algorithm detects bad (malfunctioning) single loop detectors from their volume and occupancy measurements. Its novelty is its use of time series of many samples, rather than basing decisions on single samples, as in previous approaches. The imputation algorithm models the relationship between neighboring loops as linear, and uses linear regression to estimate the value of missing or bad samples. This gives a better estimate than previous methods because it uses historical data to learn how pairs of neighboring loops behave. Detection of bad loops and imputation of loop data are important because they allow algorithms that use loop data to perform analysis without requiring them to compensate for missing or incorrect data samples. Chen/Kwon/Skabardonis/Varaiya 3 INTRODUCTION Loop detectors are the best source of real time freeway traffic data today. In California, these detectors cover most urban freeways. Loop data provide a powerful means to study and monitor traffic (2). But the data contain many holes (missing values) or bad (incorrect) values and require careful ‘cleaning’ to produce reliable results. Bad or missing samples present problems for any algorithm that uses the data for analysis. Therefore, we need both to detect when data are bad and throw them out, and to ‘fill’ holes in the data with imputed values. The goal is to produce a complete grid of reliable data. We can trust analyses that use such a complete data set. We need to detect bad data from the measurements themselves. The problem was studied by the FHWA, Washington DOT, and others. Existing algorithms usually work on the raw 20-second or 30second data, and produce a diagnosis for each sample. But it’s very hard to tell if a single 20-second sample is good or bad unless it’s very abnormal. Fortunately, loop detectors don’t just give random errors—some loops produce reasonable data all the time, while others produce suspect data all the time. By examining a time series of measurements one can readily distinguish bad behavior from good. Our diagnostics algorithm examines a day’s worth of samples together, producing convincing results. Once bad samples are thrown out, the resulting holes in the data must be filled with imputed values. Imputation using time series analysis has been suggested before, but these imputations are only effective for short periods of missing data; linear interpolation and neighborhood averages are natural imputation methods, but they don’t use all the relevant data that are available. Our imputation algorithm estimates values at a detector using data from its neighbors. 
The algorithm models each pair of neighbors linearly, and fits its parameters on historical data. It is robust, and performs better than other methods. We first describe the data and types of errors that are observed. We then survey current methods of error detection, which operate on single 20-second samples. Then we present our diagnostic algorithm, and show that it performs better. We then present our imputation algorithm, and show that this method is better than other imputation methods such as linear interpolation. DESCRIPTION OF DATA The freeway Performance Measurement System (PeMS) (1,2) collects, stores, and analyzes data from thousands of loop detectors in six districts of the California Department of Transportation (Caltrans). The PeMS database currently has 1 terabyte of data online, and collects more than 1GB per day. PeMS uses the data to compute freeway usage and congestion delays, measure and predict travel time, evaluate ramp-metering methods, and validate traffic theories. There are 14,871 main line (ML) loops in the PeMS database from six Caltrans districts. The results presented here are for main line loops. Each loop reports the volume q(t)—the number of vehicles crossing the loop detector during a 30-second time interval t, and occupancy k(t)—the fraction of this interval during which there is a vehicle above the loop. We call each pair of volume and occupancy observations a sample. The number of total possible samples in one day from ML loops in PeMS is therefore (14871 loops) x (2880 sample per loop per day) = 42 million samples. In reality, however, PeMS never receives all the samples. For example, Los Angeles has a missing sample rate of about 15%. While it’s clear when we miss samples, it’s harder to tell when a received sample is bad or incorrect. A diagnostics test needs to accept or reject samples based on our assumption of what good and bad samples look like. Chen/Kwon/Skabardonis/Varaiya 4 EXISTING DATA RELIABILITY TESTS Loop data error has plagued their effective use for a long time. In 1976, Payne (3) identified five types of detector errors and presented several methods to detect them from 20-second and 5-minute volume and occupancy measurements. These methods place thresholds on minimum and maximum flow, density, and speed, and declare a sample to be invalid if they fail any of the tests. Later, Jacobsen and Nihan at the University of Washington defined an ‘acceptable region’ in the k-q plane, and declared samples to be good only if they fell inside the region (4). We call this the Washington Algorithm. The boundaries of the acceptable region are defined by a set of parameters, which are calibrated from historical data, or derived from traffic theory. Existing detection algorithms (3,4,5) try to catch the errors described in (3). For example, ‘chattering’ and ‘pulse break up’ cause q to be high, so a threshold on q can catch these errors. But some errors cannot be caught this way, such as a detector stuck in the ‘off’ (q=0, k=0) position. Payne’s algorithm would identify this as a bad point, but good detectors will also report (0,0) when there are no vehicles in the detection period. Eliminating all (0,0) points introduces a positive bias in the data. On the other hand, the Washington Algorithm accepts the (0,0) point, but doing so makes it unable to detect the ‘stuck’ type of error. A threshold on occupancy is similarly hard to set. 
An occupancy value of 0.5 for one 30-second period should not indicate an error, but a large number of 30-second samples with occupancies of 0.5, especially during non-rush hours, points to a malfunction. We implemented the Washington Algorithm in Matlab and tested it on 30-second data from 2 loops in Los Angeles, for one day. The acceptable region is taken from (4). The data and their diagnoses are shown in Figure 1. Visually, loop 1 looks good (Figure 1b), and loop 2 looks bad (Figure 1d). Loop 2 looks bad because there are many samples with k=70% and q=0, as well as many samples with occupancies that appear too high, even during non-rush hours, and when loop 1 shows low occupancy. The Washington Algorithm, however, does not make the correct diagnosis. Out of 2875 samples, it declared 1138 samples to be bad for loop 1 and 883 bad for loop 2. In both loops, there were many false alarms. This is because the maximum acceptable slope of q/k was exceeded by many samples in free flow. This suggests that the algorithm is very sensitive to thresholds and needs to be calibrated for California. Calibration is impractical because each loop will need a separate acceptable region, and ground truth would be difficult to get. There are also false negatives–many samples from loop 2 appear to be bad because they have high occupancies during off peak times, but they were not detected by the Washington Algorithm. This illustrates a difficulty with the threshold method—the acceptable region has to be very large, because there are many possible traffic states within a 30-second period. On the other hand, a lot more information can be gained by looking at how a detector behaves over many sample times. This is why we easily recognize loop 1 to be good and loop 2 to be bad by looking at their k(t) plots, and this is a key insight that led to our diagnostics algorithm. PROPOSED DETECTOR DIAGNOSTICS ALGORITHM Design The algorithm for loop error detection uses the time series of flow and occupancy measurements, instead of making a decision based on an individual sample. It is based on the empirical observation that good and bad detectors behave very differently over time. For example, at any given instant, the flow and occupancy at a detector location can have a wide range of values, and one cannot rule most of them out; but over a day, most detectors show a similar pattern—flow and occupancy are high in the rush hours and low late at night. Figure 2a and 2b show typical 30-second flow and occupancy measurements. Most loops have outputs that look like this, but some loops behave very differently. Figure 2c and 2d show an example of a bad loop. This loop has zero flow and an occupancy value of 0.7 for several hours during the evening rush hour—clearly, these values must be incorrect. We found 4 types of abnormal time series behavior, and list them in Table 1. Types 1 and 4 are self-explanatory; types 2 and 3 are illustrated in Figure 2c, 2d, and Figure 1b. The errors in Table 1 are not mutually exclusive. For example, a loop with all zero occupancy values exhibits both type 1 and type 4 errors. A loop is declared bad if it is in any of these categories. We did not find a significant number of loops that have chatter or pulse break up, which would produce abnormally high volumes. Therefore the current form of the detection algorithm does not check for this condition.
However, a fifth error type and error check can easi", "title": "" }, { "docid": "4fea6fb309d496f9b4fd281c80a8eed7", "text": "Network alignment is the problem of matching the nodes of two graphs, maximizing the similarity of the matched nodes and the edges between them. This problem is encountered in a wide array of applications---from biological networks to social networks to ontologies---where multiple networked data sources need to be integrated. Due to the difficulty of the task, an accurate alignment can rarely be found without human assistance. Thus, it is of great practical importance to develop network alignment algorithms that can optimally leverage experts who are able to provide the correct alignment for a small number of nodes. Yet, only a handful of existing works address this active network alignment setting.\n The majority of the existing active methods focus on absolute queries (\"are nodes a and b the same or not?\"), whereas we argue that it is generally easier for a human expert to answer relative queries (\"which node in the set b1,...,bn is the most similar to node a?\"). This paper introduces two novel relative-query strategies, TopMatchings and GibbsMatchings, which can be applied on top of any network alignment method that constructs and solves a bipartite matching problem. Our methods identify the most informative nodes to query by sampling the matchings of the bipartite graph associated to the network-alignment instance.\n We compare the proposed approaches to several commonly-used query strategies and perform experiments on both synthetic and real-world datasets. Our sampling-based strategies yield the highest overall performance, outperforming all the baseline methods by more than 15 percentage points in some cases. In terms of accuracy, TopMatchings and GibbsMatchings perform comparably. However, GibbsMatchings is significantly more scalable, but it also requires hyperparameter tuning for a temperature parameter.", "title": "" }, { "docid": "b205dd971c6fb240b5fc85e9c3ee80a9", "text": "Network embedding leverages the node proximity manifested to learn a low-dimensional node vector representation for each node in the network. The learned embeddings could advance various learning tasks such as node classification, network clustering, and link prediction. Most, if not all, of the existing works, are overwhelmingly performed in the context of plain and static networks. Nonetheless, in reality, network structure often evolves over time with addition/deletion of links and nodes. Also, a vast majority of real-world networks are associated with a rich set of node attributes, and their attribute values are also naturally changing, with the emerging of new content patterns and the fading of old content patterns. These changing characteristics motivate us to seek an effective embedding representation to capture network and attribute evolving patterns, which is of fundamental importance for learning in a dynamic environment. To our best knowledge, we are the first to tackle this problem with the following two challenges: (1) the inherently correlated network and node attributes could be noisy and incomplete, it necessitates a robust consensus representation to capture their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly. In this paper, we tackle this problem by proposing a novel dynamic attributed network embedding framework - DANE. 
In particular, DANE first provides an offline method for a consensus embedding and then leverages matrix perturbation theory to maintain the freshness of the end embedding results in an online manner. We perform extensive experiments on both synthetic and real attributed networks to corroborate the effectiveness and efficiency of the proposed framework.", "title": "" }, { "docid": "2f2e5d62475918dc9cfd54522f480a11", "text": "In this paper, we propose a very simple and elegant patch-based, machine learning technique for image denoising using the higher order singular value decomposition (HOSVD). The technique simply groups together similar patches from a noisy image (with similarity defined by a statistically motivated criterion) into a 3D stack, computes the HOSVD coefficients of this stack, manipulates these coefficients by hard thresholding, and inverts the HOSVD transform to produce the final filtered image. Our technique chooses all required parameters in a principled way, relating them to the noise model. We also discuss our motivation for adopting the HOSVD as an appropriate transform for image denoising. We experimentally demonstrate the excellent performance of the technique on grayscale as well as color images. On color images, our method produces state-of-the-art results, outperforming other color image denoising algorithms at moderately high noise levels. A criterion for optimal patch-size selection and noise variance estimation from the residual images (after denoising) is also presented.", "title": "" }, { "docid": "1981aa894ee84501115a31f1a602e236", "text": "Introduction: Vascular abnormalities are relatively uncommon lesions, but head and neck is a common region for vascular malformation which is classified as benign tumors. In this paper, the authors report a rare presentation of vascular malformation in the tongue and its managements. Case Report: An 18-month-old child presented with a giant mass of the tongue which caused functional and aesthetic problems. The rapid growth pattern of cavernous hemangioma was refractory to corticosteroid. The lesion was excised without any complication. Since the mass was so huge that it not only filled the entire oral cavity but was protruding outside, airway management was a great challenge for the anesthesia plan and at the same time the surgical technique was difficult to select. Conclusion: Despite different recommended modalities in managing hemangiomas of the tongue, in cases of huge malformations, surgery could be the mainstay treatment and, provided that critical care measures are taken into account, could be performed very safely.", "title": "" }, { "docid": "f0fa3b62c04032a7bf9af44d279036dc", "text": "The aim of this paper is to present and test a model of innovativeness improvement based on the impact of organizational learning culture. The concept of organizational learning culture (OLC) is presented and defined as a set of norms and values about the functioning of an organization. They should support systematic, in-depth approaches aimed at achieving higher-level organizational learning.
The elements of an organizational learning process that we use are information acquisition, information interpretation, and behavioral and cognitive changes. Within the competing values framework OLC covers some aspects of all four different types of cultures: group, developmental, hierarchical, and rational. Constructs comprising innovativeness are innovative culture and innovations, which are made of technical (product and service) and administrative (process) innovations. We use data from 201 Korean companies employing more than 50 people. The impact of OLC on innovations empirically tested via structural equation modeling (SEM). The results show that OLC has a very strong positive direct effect on innovations as well as moderate positive indirect impact via innovative culture. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "75a1dba24f2b98904423b5db7c3b9df7", "text": "INTRODUCTION\nShort-wavelengths can have an acute impact on alertness, which is allegedly due to their action on intrinsically photosensitive retinal ganglion cells. Classical photoreceptors cannot, however, be excluded at this point in time as contributors to the alerting effect of light. The objective of this study was to compare the alerting effect at night of a white LED light source while wearing blue-blockers or not, in order to establish the contribution of short-wavelengths.\n\n\nMATERIALS AND METHODS\n20 participants stayed awake under dim light (< 5 lx) from 23:00 h to 04:00 h on two consecutive nights. On the second night, participants were randomly assigned to one light condition for 30 min starting at 3:00 h. Group A (5M/5F) was exposed to 500 μW/cm(2) of unfiltered LED light, while group B (4M/6F) was required to wear blue-blocking glasses, while exposed to 1500 μW/cm(2) from the same light device in order to achieve 500 μW/cm(2) at eye level (as measured behind the glasses). Subjective alertness, energy, mood and anxiety were assessed for both nights at 23:30 h, 01:30 h and 03:30 h using a visual analog scale (VAS). Subjective sleepiness was assessed with the Stanford Sleepiness Scale (SSS). Subjects also performed the Conners' Continuous Performance Test II (CPT-II) in order to assess objective alertness. Mixed model analysis was used to compare VAS, SSS and CPT-II parameters.\n\n\nRESULTS\nNo difference between group A and group B was observed for subjective alertness, energy, mood, anxiety and sleepiness, as well as CPT-II parameters. Subjective alertness (p < 0.001), energy (p < 0.001) and sleepiness (p < 0.05) were, however improved after light exposure on the second night independently of the light condition.\n\n\nCONCLUSIONS\nThe current study shows that when sleepiness is high, the alerting effect of light can still be triggered at night in the absence of short-wavelengths with a 30 minute light pulse of 500 μW/cm(2). This suggests that the underlying mechanism by which a brief polychromatic light exposure improves alertness is not solely due to short-wavelengths through intrinsically photosensitive retinal ganglion cells.", "title": "" } ]
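Among the passages in the list above, the loop-detector study imputes a detector's bad or missing samples from a linear model fitted between neighbouring detectors on historical data. The sketch below illustrates that idea with ordinary least squares in NumPy, assuming a single neighbour and synthetic flow values; none of the numbers come from the paper.

```python
import numpy as np

def fit_neighbor_model(neighbor_hist, target_hist):
    """Fit target ~ a * neighbor + b on historical samples where both loops are valid."""
    a, b = np.polyfit(neighbor_hist, target_hist, 1)
    return a, b

def impute(neighbor_value, coeffs):
    a, b = coeffs
    return a * neighbor_value + b

# Synthetic history: two adjacent loops whose 30-second flows track each other linearly.
rng = np.random.default_rng(1)
neighbor_hist = rng.uniform(5, 60, size=2000)                      # vehicles per 30 s
target_hist = 0.9 * neighbor_hist + 2 + rng.normal(0, 1.5, 2000)

coeffs = fit_neighbor_model(neighbor_hist, target_hist)

# A sample from the target loop is missing or flagged bad; estimate it from the neighbor.
print(impute(42.0, coeffs))
```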
scidocsrr
c9ca661691215cfb3b2f5b06373c1e71
BotOrNot: A System to Evaluate Social Bots
[ { "docid": "c051db66356f7c9aaf6c2f27d9275dc3", "text": "Online Social Networks (OSNs) have attracted millions of active users and have become an integral part of today’s Web ecosystem. Unfortunately, in the wrong hands, OSNs can be used to harvest private user data, distribute malware, control botnets, perform surveillance, spread misinformation, and even influence algorithmic trading. Usually, an adversary starts off by running an infiltration campaign using hijacked or adversary-owned OSN accounts, with an objective to connect with a large number of users in the targeted OSN. In this article, we evaluate how vulnerable OSNs are to a large-scale infiltration campaign run by socialbots: bots that control OSN accounts and mimic the actions of real users. We adopted the design of a traditional web-based botnet and built a prototype of a Socialbot Network (SbN): a group of coordinated programmable socialbots. We operated our prototype on Facebook for eight weeks, and collected data about user behavior in response to a large-scale infiltration campaign. Our results show that (1) by exploiting known social behaviors of users, OSNs such as Facebook can be infiltrated with a success rate of up to 80%, (2) subject to user profile privacy settings, a successful infiltration can result in privacy breaches where even more private user data are exposed, (3) given the economics of today’s underground markets, running a large-scale infiltration campaign might be profitable but is still not particularly attractive as a sustainable and independent business, (4) the security of socially-aware systems that use or integrate OSN platforms can be at risk, given the infiltration capability of an adversary in OSNs, and (5) defending against malicious socialbots raises a set of challenges that relate to web automation, online-offline identity binding, and usable security.", "title": "" } ]
[ { "docid": "afe26c28b56a511452096bfc211aed97", "text": "System testing is concerned with testing an entire system based on its specifications. In the context of object-oriented, UML development, this means that system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and possibly Object Constraint Language (OCL) expressions across all these artifacts. Our goal here is to support the derivation of functional system test requirements, which will be transformed into test cases, test oracles, and test drivers once we have detailed design information. In this paper, we describe a methodology in a practical way and illustrate it with an example. In this context, we address testability and automation issues, as the ultimate goal is to fully support system testing activities with high-capability tools.", "title": "" }, { "docid": "d49d099d3f560584f2d080e7a1e2711f", "text": "Dark Web forums are heavily used by extremist and terrorist groups for communication, recruiting, ideology sharing, and radicalization. These forums often have relevance to the Iraqi insurgency or Al-Qaeda and are of interest to security and intelligence organizations. This paper presents an automated approach to sentiment and affect analysis of selected radical international Ahadist Dark Web forums. The approach incorporates a rich textual feature representation and machine learning techniques to identify and measure the sentiment polarities and affect intensities expressed in forum communications. The results of sentiment and affect analysis performed on two large-scale Dark Web forums are presented, offering insight into the communities and participants.", "title": "" }, { "docid": "abdc80a5e567ded6d20b9a00ce1030f7", "text": "OBJECTIVE\nThere is increasing recognition that autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD) are associated with significant costs and burdens. However, research on their impact has focused mostly on the caregivers of young children; few studies have examined caregiver burden as children transition into adolescence and young adulthood, and no one has compared the impact of ASD to other neurodevelopmental disorders (e.g., ADHD).\n\n\nMETHOD\nWe conducted an observational study of 192 families caring for a young person (aged 14 to 24 years) with a childhood diagnosis of ASD or ADHD (n = 101 and n = 91, respectively) in the United Kingdom. A modified stress-appraisal model was used to investigate the correlates of caregiver burden as a function of family background (parental education), primary stressors (symptoms), primary appraisal (need), and resources (use of services).\n\n\nRESULTS\nBoth disorders were associated with a high level of caregiver burden, but it was significantly greater in ASD. In both groups, caregiver burden was mainly explained by the affected young person's unmet need. Domains of unmet need most associated with caregiver burden in both groups included depression/anxiety and inappropriate behavior. Specific to ASD were significant associations between burden and unmet needs in domains such as social relationships and major mental health problems.\n\n\nCONCLUSIONS\nAdolescence and young adulthood are associated with high levels of caregiver burden in both disorders; in ASD, the level is comparable to that reported by persons caring for individuals with a brain injury. 
Interventions are required to reduce caregiver burden in this population.", "title": "" }, { "docid": "e4c879dffd5be1111573c7951ce9f8cd", "text": "Many algorithms have been implemented to the problem of Automatic Text Categorization (ATC). Most of the work in this area has been carried out on English texts, with only a few researchers addressing Arabic texts. We have investigated the use of the K-Nearest Neighbour (K-NN) classifier, with an Inew, cosine, jaccard and dice similarities, in order to enhance Arabic ATC. We represent the dataset as un-stemmed and stemmed data; with the use of TREC-2002, in order to remove prefixes and suffixes. However, for statistical text representation, Bag-Of-Words (BOW) and character-level 3 (3-Gram) were used. In order to, reduce the dimensionality of feature space; we used several feature selection methods. Experiments conducted with Arabic text showed that the K-NN classifier, with the new method similarity Inew 92.6% Macro-F1, had better performance than the K-NN classifier with cosine, jaccard and dice similarities. Chi-square feature selection, with representation by BOW, led to the best performance over other feature selection methods using BOW and 3-Gram.", "title": "" }, { "docid": "7c98ac06ea8cb9b83673a9c300fb6f4c", "text": "Heart rate monitoring from wrist-type photoplethysmographic (PPG) signals during subjects' intensive exercise is a difficult problem, since the PPG signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. In this work, we formulate the heart rate estimation problem as a sparse signal recovery problem, and use a sparse signal recovery algorithm to calculate high-resolution power spectra of PPG signals, from which heart rates are estimated by selecting corresponding spectrum peaks. To facilitate the use of sparse signal recovery, we propose using bandpass filtering, singular spectrum analysis, and temporal difference operation to partially remove motion artifacts and sparsify PPG spectra. The proposed method was tested on PPG recordings from 10 subjects who were fast running at the peak speed of 15km/hour. The results showed that the averaged absolute estimation error was only 2.56 Beats/Minute, or 1.94% error compared to ground-truth heart rates from simultaneously recorded ECG.", "title": "" }, { "docid": "96f2e93e188046fa1d97cedc51b07808", "text": "The development of next-generation electrical link technology to support 400Gb/s standards is underway [1-5]. Physical constraints paired to the small area available to dissipate heat, impose limits to the maximum number of serial interfaces and therefore their minimum speed. As such, aggregation of currently available 25Gb/s systems is not an option, and the migration path requires serial interfaces to operate at increased rates. According to CEI-56G and IEEE P802.3bs emerging standards, PAM-4 signaling paired to forward error correction (FEC) schemes is enabling several interconnect applications and low-loss profiles [1]. Since the amplitude of each eye is reduced by a factor of 3, while noise power is only halved, a high transmitter (TX) output amplitude is key to preserve high SNR. However, compared to NRZ, the design of a PAM-4 TX is challenged by tight linearity constraints, required to minimize the amplitude distortion among the 4 levels [1]. In principle, current-mode (CM) drivers can deliver a differential peak-to-peak swing up to 4/3(VDD-VOV), but they struggle to generate high-swing PAM-4 levels with the required linearity. 
This is confirmed by recently published CM PAM-4 drivers, showing limited output swings even with VDD raised to 1.5V [2-4]. Source-series terminated (SST) drivers naturally feature better linearity and represent a valid alternative, but the maximum differential peak-to-peak swing is bounded to VDD only. In [5], a dual-mode SST driver supporting NRZ/PAM-4 was presented, but without FFE for PAM-4 mode. In this paper, we present a PAM-4 transmitter leveraging a hybrid combination of SST and CM driver. The CM part enhances the output swing by 30% beyond the theoretical limit of a conventional SST implementation, while being calibrated to maintain the desired linearity level. A 5b 4-tap FIR filter, where equalization tuning can be controlled independently from output matching, is also embedded. The transmitter, implemented in 28nm CMOS FDSOI, incorporates a half-rate serializer, duty-cycle correction (DCC), ≫2kV HBM ESD diodes, and delivers a full swing of 1.3Vppd at 45Gb/s while drawing 120mA from a 1V supply. The power efficiency is ~2 times better than those compared in this paper.", "title": "" }, { "docid": "bd30e7918a0187ff3d01d3653258bf27", "text": "Recursive neural network is one of the most successful deep learning models for natural language processing due to the compositional nature of text. The model recursively composes the vector of a parent phrase from those of child words or phrases, with a key component named composition function. Although a variety of composition functions have been proposed, the syntactic information has not been fully encoded in the composition process. We propose two models, Tag Guided RNN (TGRNN for short) which chooses a composition function according to the part-ofspeech tag of a phrase, and Tag Embedded RNN/RNTN (TE-RNN/RNTN for short) which learns tag embeddings and then combines tag and word embeddings together. In the fine-grained sentiment classification, experiment results show the proposed models obtain remarkable improvement: TG-RNN/TE-RNN obtain remarkable improvement over baselines, TE-RNTN obtains the second best result among all the top performing models, and all the proposed models have much less parameters/complexity than their counterparts.", "title": "" }, { "docid": "aa23e075bbd0f87ae8a8a9eadae4e697", "text": "Mammogram classification is directly related to computer-aided diagnosis of breast cancer. Traditional methods requires great effort to annotate the training data by costly manual labeling and specialized computational models to detect these annotations during test. Inspired by the success of using deep convolutional features for natural image analysis and multi-instance learning for labeling a set of instances/patches, we propose end-to-end trained deep multiinstance networks for mass classification based on whole mammogram without the aforementioned costly need to annotate the training data. We explore three different schemes to construct deep multi-instance networks for whole mammogram classification. Experimental results on the INbreast dataset demonstrate the robustness of proposed deep networks compared to previous work using segmentation and detection annotations in the training.", "title": "" }, { "docid": "0201a5f0da2430ec392284938d4c8833", "text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. 
In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.", "title": "" }, { "docid": "5549b770dd97c58e6bc5fc18b316e0e4", "text": "Due to its rapid speed of information spread, wide user bases, and extreme mobility, Twitter is drawing attention as a potential emergency reporting tool under extreme events. However, at the same time, Twitter is sometimes despised as a citizen based non-professional social medium for propagating misinformation, rumors, and, in extreme case, propaganda. This study explores the working dynamics of the rumor mill by analyzing Twitter data of the Haiti Earthquake in 2010. For this analysis, two key variables of anxiety and informational uncertainty are derived from rumor theory, and their interactive dynamics are measured by both quantitative and qualitative methods. Our research finds that information with credible sources contribute to suppress the level of anxiety in Twitter community, which leads to rumor control and high information quality.", "title": "" }, { "docid": "af9824a336bb1173f500cd4b976640b5", "text": "The ever-increasing volume of spatial data has greatly challenged our ability to extract useful but implicit knowledge from them. As an important branch of spatial data mining, spatial outlier detection aims to discover the objects whose non-spatial attribute values are significantly different from the values of their spatial neighbors. These objects, called spatial outliers, may reveal important phenomena in a number of applications including traffic control, satellite image analysis, weather forecast, and medical diagnosis. Most of the existing spatial outlier detection algorithms mainly focus on identifying single attribute outliers and could potentially misclassify normal objects as outliers when their neighborhoods contain real spatial outliers with very large or small attribute values. In addition, many spatial applications contain multiple non-spatial attributes which should be processed altogether to identify outliers. To address these two issues, we formulate the spatial outlier detection problem in a general way, design two robust detection algorithms, one for single attribute and the other for multiple attributes, and analyze their computational complexities. Experiments were conducted on a real-world data set, West Nile virus data, to validate the effectiveness of the proposed algorithms.", "title": "" }, { "docid": "5e105c819b88d1fdfe34c4fa8bf480ba", "text": "In this paper, we propose a real-time image superpixel segmentation method with 50 frames/s by using the density-based spatial clustering of applications with noise (DBSCAN) algorithm. In order to decrease the computational costs of superpixel algorithms, we adopt a fast two-step framework. 
In the first clustering stage, the DBSCAN algorithm with color-similarity and geometric restrictions is used to rapidly cluster the pixels, and then small clusters are merged into superpixels by their neighborhood through a distance measurement defined by color and spatial features in the second merging stage. A robust and simple distance function is defined for obtaining better superpixels in these two steps. The experimental results demonstrate that our real-time superpixel algorithm (50 frames/s) based on DBSCAN clustering outperforms the state-of-the-art superpixel segmentation methods in terms of both accuracy and efficiency.", "title": "" }, { "docid": "57e95050bcaf50fdb6c7a5390382a1b7", "text": "We compare our own embodied conversational agent (ECA) scheme, BotCom, with seven other complex Internet-based ECAs according to recently published information about them, and highlight some important attributes that have received little attention in the construction of realistic ECAs. BotCom incorporates the use of emotions, humor and complex information services. We cover issues that are likely to be of greatest interest for developers of ECAs that, like BotCom, are directed towards intensive commercial use. 1 Using ECAs on the Internet Many embodied conversational agents (ECAs) are targeting the Internet. However, systems that are bound to this global network not only benefit from several advantages of the huge amount of accessible information provided by this medium, but inherit its common problems as well. Among those are the difficulty of relevant search, the complexity of available information, unstructuredness, bandwidth limitations, etc. So, what are the main arguments in favor of deploying an ECA on the Internet? First of all, the preference for real-time events and real-time information flow expresses an innate need of mankind. Internet ECAs have this advantage as opposed to any other on-line customer-company communication method, such as web pages, email, guest books, etc. In addition, secondary orality, the communication by dialogues as opposed to monologues, is far more effective when dealing with humans [5]. Furthermore, even though ECAs and simpler chatterbots may give wrong answers to certain questions, they create some sort of representation of themselves in the customer's mind [13]. An ordinary website can be considered not only less interactive than one with an ECA, but the way it operates is closer to monologues than to dialogues. We have developed “BotCom”, a fully working prototype system, as part of a research project. It is capable of chatting with users about different topics as well as displaying synchronized affective feedback based on a complex emotional state generator, GALA. Moreover, it has a feature of connecting to various information sources and search engines, thus enabling an easily scalable knowledge base. Its primary use will be interactive website navigation, entertainment, marketing and education. BotCom is currently being introduced into commercial use. There is no space to discuss all the features and interesting implementation experiences with our BotCom ECA in this paper. Therefore we focus on some highlights where, we think, our ECA is special or where a theoretical or practical observation has proved to be particularly useful, so that others might benefit from these as well. 
2 Comparison of Popular Internet Chatterbots During design and implementation we have analyzed, evaluated and constantly monitored existing ECAs in order to reinforce and validate our development approach. We did not follow only one methodology; several of them ([2], [4], [12], [15]) served as a basis of our own compound method, as, in spite of the similarities, overlaps frequently occurred and all of them contained unique evaluation variables. We studied the following (either commercial or award-wining) chatbots (see Table 1. for the results): Ultra Hal Assistant 4.5 (Zabaware, Inc., http://www.zabaware.com/assistant) Ramona (KurzweiAI.net, http://www.kurzweilai.net/) Elbot (Kiwilogic, http://www.elbot.com/) Ella (Kevin L. Copple, http://www.ellaz.com/EllaASPLoebner/Loebner2002.aspx) Nicole (NativeMinds, http://an1-sj.nativeminds.com/demos_default.html) Lucy (Artificial Life, http://www.artificial-life.com/v5/website.php) Julia (Conversive, http://www.vperson.com) 2.1 Visual Appearance In most cases visualization is typically solved by 2D graphics focusing only on the face, or photo-realistic schemes of still pictures (photos). Some tend to limit animation to only certain parts of the body (e.g. eyes, lips, eye-brows, chin), the roles of which are considered to be important in communication [11]. 3D animations are also applied occasionally, for instance in Lucy’s case. Despite the more lifelike and realistic appearance of 3D real-time rendered graphics, there is no underpinning evidence of differences in expressiveness amongst cartoons, photos, movies etc., though various studies confirm that users assume high-quality animated ECAs to be more intelligent [15]. Aiko, a female instance of BotCom, runs on the users’ web interface. The representation of her reactions and emotions is implemented through a 3D pre-processed (pre-rendered), realistic animation. Since the face and gestures provide the significant secondary communication channels [8], only the head, the torso (shoulders, arms) and occasionally the hands were visualized. To be able to diversify and refine the reactions, the collection of animations is extendable, but the right balance should be kept", "title": "" }, { "docid": "4f9df22aa072503e23384f62d4b5acdb", "text": "Convolutional neural networks are designed for dense data, but vision data is often sparse (stereo depth, point clouds, pen stroke, etc.). We present a method to handle sparse depth data with optional dense RGB, and accomplish depth completion and semantic segmentation changing only the last layer. Our proposal efficiently learns sparse features without the need of an additional validity mask. We show how to ensure network robustness to varying input sparsities. Our method even works with densities as low as 0.8% (8 layer lidar), and outperforms all published state-of-the-art on the Kitti depth completion benchmark.", "title": "" }, { "docid": "d9cdbc7dd4d8ae34a3d5c1765eb48072", "text": "Beanstalk is an educational game for children ages 6-10 teaching balance-fulcrum principles while folding in scientific inquiry and socio-emotional learning. This paper explores the incorporation of these additional dimensions using intrinsic motivation and a framing narrative. 
Four versions of the game are detailed, along with preliminary player data in a 2×2 pilot test with 64 children shaping the modifications of Beanstalk for much broader testing.", "title": "" }, { "docid": "7a82c189c756e9199ae0d394ed9ade7f", "text": "Since the late 1970s, globalization has become a phenomenon that has elicited polarizing responses from scholars, politicians, activists, and the business community. Several scholars and activists, such as labor unions, see globalization as an anti-democratic movement that would weaken the nation-state in favor of the great powers. There is no doubt that globalization, no matter how it is defined, is here to stay, and is causing major changes on the globe. Given the rapid proliferation of advances in technology, communication, means of production, and transportation, globalization is a challenge to health and well-being worldwide. On an international level, the average human lifespan is increasing primarily due to advances in medicine and technology. The trends are a reflection of increasing health care demands along with the technological advances needed to prevent, diagnose, and treat disease (IOM, 1997). Along with this increase in longevity comes the concern of finding commonalities in the treatment of health disparities for all people. In a seminal work by Friedman (2005), it is posited that the connecting of knowledge into a global network will result in eradication of most of the healthcare translational barriers we face today. Since healthcare is a knowledge-driven profession, it is reasonable to presume that global healthcare will become more than just a buzzword. This chapter looks at all aspects or components of globalization but focuses specifically on how the movement impacts the health of the people and the nations of the world. The authors propose to use the concept of health as a measuring stick of the claims made on behalf of globalization.", "title": "" }, { "docid": "9c44488f9af6ac04c0379c015bb1769b", "text": "Cookie stuffing is an activity which allows unscrupulous actors online to defraud affiliate marketing programs by causing themselves to receive credit for purchases made by web users, even if the affiliate marketer did not actively perform any marketing for the affiliate program. Using two months of HTTP request logs from a large public university, we present an empirical study of fraud in affiliate marketing programs. First, we develop an efficient, decision-tree based technique for detecting cookie-stuffing in HTTP request logs. Our technique replicates domain-informed human labeling of the same data with 93.3% accuracy. Second, we find that over one-third of publishers in affiliate marketing programs use fraudulent cookie-stuffing techniques in an attempt to claim credit from online retailers for illicit referrals. However, most realized conversions are credited to honest publishers. Finally, we present a stake holder analysis of affiliate marketing fraud and find that the costs and rewards of affiliate marketing programs are spread across all parties involved. Biography Peter Snyder is pursuing a Ph.D. in the Department of Computer Science at the University of Illinois at Chicago. He received his B.A. in political science at Lawrence University, with a focus on economics. His current research focuses on the security and privacy of web browsing. 
His current projects include measuring the popularity, desirability and security costs of browser complexity, and investigating alternative web systems that prioritize client security and code predictability at minimal cost to web-author expressivity. ** ALL ARE WELCOME ** Host: Professor Kehuan Zhang (Tel: 3943-8391, Email: khzhang@ie.cuhk.edu.hk) Enquiries: Information Engineering Dept., CUHK (Tel.: 3943-8385)", "title": "" }, { "docid": "28cf177349095e7db4cdaf6c9c4a6cb1", "text": "Neural Architecture Search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1) the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; (2) most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the entire Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform both hand-crafted as well as automatically-designed networks.", "title": "" }, { "docid": "7025d357898c5997e225299f398c42f0", "text": "UNLABELLED\nAnnotating genetic variants, especially non-coding variants, for the purpose of identifying pathogenic variants remains a challenge. Combined annotation-dependent depletion (CADD) is an algorithm designed to annotate both coding and non-coding variants, and has been shown to outperform other annotation algorithms. CADD trains a linear kernel support vector machine (SVM) to differentiate evolutionarily derived, likely benign, alleles from simulated, likely deleterious, variants. However, SVMs cannot capture non-linear relationships among the features, which can limit performance. To address this issue, we have developed DANN. DANN uses the same feature set and training data as CADD to train a deep neural network (DNN). DNNs can capture non-linear relationships among features and are better suited than SVMs for problems with a large number of samples and features. We exploit Compute Unified Device Architecture-compatible graphics processing units and deep learning techniques such as dropout and momentum training to accelerate the DNN training. DANN achieves about a 19% relative reduction in the error rate and about a 14% relative increase in the area under the curve (AUC) metric over CADD's SVM methodology.\n\n\nAVAILABILITY AND IMPLEMENTATION\nAll data and source code are available at https://cbcl.ics.uci.edu/public_data/DANN/.", "title": "" }, { "docid": "8bbc2ce1849d65425bece5ada5890b71", "text": "The performance in higher secondary school education in India is a turning point in the academic lives of all students. 
As this academic performance is influenced by many factors, it is essential to develop a predictive data mining model for students’ performance so as to identify the slow learners and study the influence of the dominant factors on their academic performance. In the present investigation, a survey-cum-experimental methodology was adopted to generate a database, which was constructed from a primary and a secondary source. While the primary data was collected from the regular students, the secondary data was gathered from the school and the office of the Chief Educational Officer (CEO). A total of 1000 datasets of the year 2006 from five different schools in three different districts of Tamilnadu were collected. The raw data was preprocessed by filling in missing values, transforming values from one form into another and selecting relevant attributes/variables. As a result, we had 772 student records, which were used for CHAID prediction model construction. A set of prediction rules was extracted from the CHAID prediction model and the efficiency of the generated CHAID prediction model was assessed. The accuracy of the present model was compared with that of other models and it has been found to be satisfactory.", "title": "" } ]
scidocsrr
2b529cc37ca3c1276697d6375a064729
Melanoma exosomes educate bone marrow progenitor cells toward a pro-metastatic phenotype through MET
[ { "docid": "97ef62d13180ee6bb44ec28ff3b3d53e", "text": "Glioblastoma tumour cells release microvesicles (exosomes) containing mRNA, miRNA and angiogenic proteins. These microvesicles are taken up by normal host cells, such as brain microvascular endothelial cells. By incorporating an mRNA for a reporter protein into these microvesicles, we demonstrate that messages delivered by microvesicles are translated by recipient cells. These microvesicles are also enriched in angiogenic proteins and stimulate tubule formation by endothelial cells. Tumour-derived microvesicles therefore serve as a means of delivering genetic information and proteins to recipient cells in the tumour environment. Glioblastoma microvesicles also stimulated proliferation of a human glioma cell line, indicating a self-promoting aspect. Messenger RNA mutant/variants and miRNAs characteristic of gliomas could be detected in serum microvesicles of glioblastoma patients. The tumour-specific EGFRvIII was detected in serum microvesicles from 7 out of 25 glioblastoma patients. Thus, tumour-derived microvesicles may provide diagnostic information and aid in therapeutic decisions for cancer patients through a blood test.", "title": "" } ]
[ { "docid": "10fff590f9c8e99ebfd1b4b4e453241f", "text": "Object-oriented programming has many advantages over conventional procedural programming languages for constructing highly flexible, adaptable, and extensible systems. Therefore a transformation of procedural programs to object-oriented architectures becomes an important process to enhance the reuse of procedural programs. Moreover, it would be useful to assist by automatic methods the software developers in transforming procedural code into an equivalent object-oriented one. In this paper we aim at introducing an agglomerative hierarchical clustering algorithm that can be used for assisting software developers in the process of transforming procedural code into an object-oriented architecture. We also provide a code example showing how our approach works, emphasizing, this way, the potential of our proposal.", "title": "" }, { "docid": "f0c149dd3cb05b694c1eae9986d465f4", "text": "Decomposition is a basic strategy in traditional multiobjective optimization. However, it has not yet been widely used in multiobjective evolutionary optimization. This paper proposes a multiobjective evolutionary algorithm based on decomposition (MOEA/D). It decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously. Each subproblem is optimized by only using information from its several neighboring subproblems, which makes MOEA/D have lower computational complexity at each generation than MOGLS and nondominated sorting genetic algorithm II (NSGA-II). Experimental results have demonstrated that MOEA/D with simple decomposition methods outperforms or performs similarly to MOGLS and NSGA-II on multiobjective 0-1 knapsack problems and continuous multiobjective optimization problems. It has been shown that MOEA/D using objective normalization can deal with disparately-scaled objectives, and MOEA/D with an advanced decomposition method can generate a set of very evenly distributed solutions for 3-objective test instances. The ability of MOEA/D with small population, the scalability and sensitivity of MOEA/D have also been experimentally investigated in this paper.", "title": "" }, { "docid": "7fefe01183ad6c9c897b83f9b9bbe5be", "text": "The Pap smear test is a manual screening procedure that is used to detect precancerous changes in cervical cells based on color and shape properties of their nuclei and cytoplasms. Automating this procedure is still an open problem due to the complexities of cell structures. In this paper, we propose an unsupervised approach for the segmentation and classification of cervical cells. The segmentation process involves automatic thresholding to separate the cell regions from the background, a multi-scale hierarchical segmentation algorithm to partition these regions based on homogeneity and circularity, and a binary classifier to finalize the separation of nuclei from cytoplasm within the cell regions. Classification is posed as a grouping problem by ranking the cells based on their feature characteristics modeling abnormality degrees. The proposed procedure constructs a tree using hierarchical clustering, and then arranges the cells in a linear order by using an optimal leaf ordering algorithm that maximizes the similarity of adjacent leaves without any requirement for training examples or parameter adjustment. 
Performance evaluation using two data sets shows the effectiveness of the proposed approach in images having inconsistent staining, poor contrast, and overlapping cells.", "title": "" }, { "docid": "775e78af608c07853af2e2c31a59bf5c", "text": "This investigation compared the effect of high-volume (VOL) versus high-intensity (INT) resistance training on stimulating changes in muscle size and strength in resistance-trained men. Following a 2-week preparatory phase, participants were randomly assigned to either a high-volume (VOL; n = 14, 4 × 10-12 repetitions with ~70% of one repetition maximum [1RM], 1-min rest intervals) or a high-intensity (INT; n = 15, 4 × 3-5 repetitions with ~90% of 1RM, 3-min rest intervals) training group for 8 weeks. Pre- and posttraining assessments included lean tissue mass via dual energy x-ray absorptiometry, muscle cross-sectional area and thickness of the vastus lateralis (VL), rectus femoris (RF), pectoralis major, and triceps brachii muscles via ultrasound images, and 1RM strength in the back squat and bench press (BP) exercises. Blood samples were collected at baseline, immediately post, 30 min post, and 60 min postexercise at week 3 (WK3) and week 10 (WK10) to assess the serum testosterone, growth hormone (GH), insulin-like growth factor-1 (IGF1), cortisol, and insulin concentrations. Compared to VOL, greater improvements (P < 0.05) in lean arm mass (5.2 ± 2.9% vs. 2.2 ± 5.6%) and 1RM BP (14.8 ± 9.7% vs. 6.9 ± 9.0%) were observed for INT. Compared to INT, area under the curve analysis revealed greater (P < 0.05) GH and cortisol responses for VOL at WK3 and cortisol only at WK10. Compared to WK3, the GH and cortisol responses were attenuated (P < 0.05) for VOL at WK10, while the IGF1 response was reduced (P < 0.05) for INT. It appears that high-intensity resistance training stimulates greater improvements in some measures of strength and hypertrophy in resistance-trained men during a short-term training period.", "title": "" }, { "docid": "f07d44c814bdb87ffffc42ace8fd53a4", "text": "We describe a batch method that uses a sizeable fraction of the training set at each iteration, and that employs second-order information. • To improve the learning process, we follow a multi-batch approach in which the batch changes at each iteration. • This inherently gives the algorithm a stochastic flavor that can cause instability in L-BFGS. • We show how to perform stable quasi-Newton updating in the multi-batch setting, illustrate the behavior of the algorithm on a distributed computing platform, and study its convergence properties for both the convex and nonconvex cases. Introduction: min_{w ∈ R^d} F(w) = (1/n) ∑_{i=1}^{n} f(w; x_i, y_i). Idea: select a sizeable sample S_k ⊂ {1, . . . , n} at every iteration and perform quasi-Newton steps. 1. Distributed computing setting: distributed gradient computation (with faults). 2. Multi-batch setting: samples are changed at every iteration to accelerate learning. Goal: show that stable quasi-Newton updating can be achieved in both settings without incurring extra computational cost or special synchronization. Issue: the samples used at the beginning and at the end of every iteration are different • potentially harmful for quasi-Newton methods. Key: controlled sampling • consecutive samples overlap, S_k ∩ S_{k+1} = O_k ≠ ∅ • gradient differences are computed on this overlap, giving stable quasi-Newton updates. Multi-Batch L-BFGS Method. At the k-th iteration: • a sample S_k ⊂ {1, . . . , n} is chosen, and the iterates are updated via w_{k+1} = w_k − α_k H_k g_k^{S_k}, where g_k^{S_k} is the batch gradient, g_k^{S_k} = (1/|S_k|) ∑_{i ∈ S_k} ∇f(w_k; x_i, y_i), and H_k is the inverse BFGS Hessian approximation, updated as H_{k+1} = V_k^T H_k V_k + ρ_k s_k s_k^T, with ρ_k = 1/(y_k^T s_k) and V_k = I − ρ_k y_k s_k^T. • To ensure consistency, the curvature pairs are updated as s_{k+1} = w_{k+1} − w_k and y_{k+1} = g_{k+1}^{O_k} − g_k^{O_k}, where g_{k+1}^{O_k} and g_k^{O_k} are gradients computed on the overlapping samples only, O_k = S_k ∩ S_{k+1}. Sample selection:", "title": "" }, { "docid": "714515b82c7411550ffd1aa00acde62f", "text": "This paper presents a vision guidance approach using an image-based visual servo (IBVS) for an aerial manipulator combining a multirotor with a multi-degree-of-freedom robotic arm. To take into account the dynamic characteristics of the combined manipulation platform, the kinematic and dynamic models of the combined system are derived. Based on the combined model, a passivity-based adaptive controller which can be applied to both position and velocity control is designed. The position control is utilized for waypoint tracking such as taking off and landing, and the velocity control is engaged when the platform is guided by visual information. In addition, a guidance law utilizing IBVS is employed with modifications. To secure the view of an object with an eye-in-hand camera, IBVS is utilized with images taken from a fisheye camera. Also, to compensate for the underactuation of the multirotor, an image adjustment method is developed. With the proposed control and guidance laws, autonomous flight experiments involving grabbing and transporting an object are carried out. Successful experimental results demonstrate that the proposed approaches can be applied in various types of manipulation missions.", "title": "" }, { "docid": "727add0c0e44d0044d7f58b3633160d2", "text": "Case II: deterministic transitions, continuous state. Case III: “mildly” stochastic transitions, finite state: P(s, a, s') ≥ 1 − δ. Case IV: bounded-noise stochastic transitions, continuous state: s_{t+1} = T(s_t, a_t) + w_t, ||w_t|| ≤ ∆. Planning and Learning in Environments with Delayed Feedback. Thomas J. Walsh, Ali Nouri, Lihong Li, Michael L. Littman. Rutgers Laboratory for Real Life Reinforcement Learning, Computer Science Department, Rutgers University, Piscataway, NJ", "title": "" }, { "docid": "9c193f2f6611754905b2ac1f0dcff3ca", "text": "This article discusses the implementation of iris recognition in improving the security of border control systems in the United Arab Emirates. The article explains the significance of the implemented solution and the advantages the government has gained to date. The UAE deployment of iris recognition technology is currently the largest in the world, both in terms of the number of iris records enrolled (more than 840,751) and the number of iris comparisons performed daily, 6,225,761,155 (6.2 billion), in ‘all-against-all’ search mode. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5f6de33d401f2a389a9a143b2b0de0c5", "text": "A novel broadband RHCP/LHCP reconfigurable patch antenna array using an E-shaped patch antenna element is investigated. By applying particle swarm optimization (PSO), a challenging, combined S11-AR bandwidth of 17% was achieved and verified through measurement for the isolated element using MEMS switches at an overall substrate thickness of 0.092λ0. The achieved bandwidth is significantly higher than the current state-of-the-art in single-layer, single-feed circularly polarized (CP) patch element designs with similar substrate thickness. 
A small percentage of the upper frequency band experiences a pronounced beam squint similar to other thick substrate CP patch antennas. To overcome the beam squint, a novel rotated-element configuration is implemented to force pattern symmetry. Derivations of pattern symmetry and network effects are also shown. The final design prototype using rotated elements provides a measured 20% S11-AR bandwidth with good radiation pattern stability.", "title": "" }, { "docid": "fe6d2d70c2afcb7129bcb030f0486d75", "text": "A novel 24-GHz patch series-fed antenna array is presented for radar system. It consists of an 8×8 transmitting antenna array and two 8×4 co-aperture receiving antenna arrays, which can not only achieve good suppression of side lobes and high aperture efficiency, but also avoid ambiguity in angle measurement. The simulated results show that the gains of the transmitting and the receiving antenna arrays are respectively 20.9dBi and 15.9dBi, while the suppressions of side lobes in two planes are both over 20dB.", "title": "" }, { "docid": "258f246b97bba091e521cd265126191a", "text": "This paper presents a method of electric tunability using varactor diodes installed on SIR coaxial resonators and associated filters. Using varactor diodes connected in parallel, in combination with the SIR coaxial resonator, makes it possible, by increasing the number of varactor diodes, to expand the tuning range and maintain the unloaded quality factor of the resonator. A second order filter, tunable in center frequency, was built with these resonators, providing a very large tuning range.", "title": "" }, { "docid": "4a70c88a031195a5593aaa403b9681cd", "text": "In this paper, we are interested in two seemingly different concepts: adversarial training and generative adversarial networks (GANs). Particularly, how these techniques help to improve each other. To this end, we analyze the limitation of adversarial training as the defense method, starting from questioning how well the robustness of a model can generalize. Then, we successfully improve the generalizability via data augmentation by the “fake” images sampled from generative adversarial network. After that, we are surprised to see that the resulting robust classifier leads to a better generator, for free. We intuitively explain this interesting phenomenon and leave the theoretical analysis for future work. Motivated by these observations, we propose a system that combines generator, discriminator, and adversarial attacker in a single network. After end-to-end training and fine tuning, our method can simultaneously improve the robustness of classifiers, measured by accuracy under strong adversarial attacks, and the quality of generators, evaluated both aesthetically and quantitatively. In terms of the classifier, we achieve better robustness than the state-of-the-art adversarial training algorithm proposed in (Madry etla., 2017), while our generator achieves competitive performance compared with SN-GAN (Miyato and Koyama, 2018). Source code is publicly available online at https://github.com/anonymous.", "title": "" }, { "docid": "36209810c1a842c871b639220ba63036", "text": "This paper proposes an extension to the Generative Adversarial Networks (GANs), namely as ArtGAN to synthetically generate more challenging and complex images such as artwork that have abstract characteristics. This is in contrast to most of the current solutions that focused on generating natural images such as room interiors, birds, flowers and faces. 
The key innovation of our work is to allow back-propagation of the loss function w.r.t. the labels (randomly assigned to each generated images) to the generator from the discriminator. With the feedback from the label information, the generator is able to learn faster and achieve better generated image quality. Empirically, we show that the proposed ArtGAN is capable to create realistic artwork, as well as generate compelling real world images that globally look natural with clear shape on CIFAR-10.", "title": "" }, { "docid": "31bb5687b284844596f437774b8b11ce", "text": "In this paper, a new algorithm for calculating the QR decomposition (QRD) of a polynomial matrix is introduced. This algorithm amounts to transforming a polynomial matrix to upper triangular form by application of a series of paraunitary matrices such as elementary delay and rotation matrices. It is shown that this algorithm can also be used to formulate the singular value decomposition (SVD) of a polynomial matrix, which essentially amounts to diagonalizing a polynomial matrix again by application of a series of paraunitary matrices. Example matrices are used to demonstrate both types of decomposition. Mathematical proofs of convergence of both decompositions are also outlined. Finally, a possible application of such decompositions in multichannel signal processing is discussed.", "title": "" }, { "docid": "37a47bd2561b534d5734d250d16ff1c2", "text": "Many chronic eye diseases can be conveniently investigated by observing structural changes in retinal blood vessel diameters. However, detecting changes in an accurate manner in face of interfering pathologies is a challenging task. The task is generally performed through an automatic computerized process. The literature shows that powerful methods have already been proposed to identify vessels in retinal images. Though a significant progress has been achieved toward methods to separate blood vessels from the uneven background, the methods still lack the necessary sensitivity to segment fine vessels. Recently, a multi-scale line-detector method proved its worth in segmenting thin vessels. This paper presents modifications to boost the sensitivity of this multi-scale line detector. First, a varying window size with line-detector mask is suggested to detect small vessels. Second, external orientations are fed to steer the multi-scale line detectors into alignment with flow directions. Third, optimal weights are suggested for weighted linear combinations of individual line-detector responses. Fourth, instead of using one global threshold, a hysteresis threshold is proposed to find a connected vessel tree. The overall impact of these modifications is a large improvement in noise removal capability of the conventional multi-scale line-detector method while finding more of the thin vessels. The contrast-sensitive steps are validated using a publicly available database and show considerable promise for the suggested strategy.", "title": "" }, { "docid": "d90cbc05e127950ec3dbc39505b48326", "text": "Blockchain is a novel technology that is rising a lot of interest in the industrial and research sectors because its properties of decentralisation, immutability and data integrity. Initially, the underlying consensus mechanism has been designed for permissionless blockchain on trustless network model through the proof-of-work, i.e. a mathematical challenge which requires high computational power. 
This solution suffers from poor performance, hence alternative consensus algorithms such as proof-of-stake have been proposed. Conversely, for permissioned blockchains, where participants are known and authenticated, variants of distributed consensus algorithms have been employed. However, most of them come without a formal security analysis or explicit trust assumptions, because of the absence of an established body of knowledge. Therefore the lack of adequate analysis of these algorithms hinders any cautious evaluation of their effectiveness in a real-world setting where systems are deployed over trustless networks, i.e., the Internet. In this thesis we analyse the security and performance of permissioned blockchains. Thus we design a general model for such a scenario and propose a general benchmark for the experimental evaluations. This work brings two main contributions. The first contribution concerns the analysis of Proof-of-Authority, a Byzantine fault-tolerant consensus protocol. We compare two of the main algorithms, named Aura and Clique, with respect to the well-established Practical Byzantine Fault-Tolerant protocol, in terms of security and performance. We refer to the CAP theorem for the consistency, availability and partition tolerance guarantees, and we describe a possible attack scenario in which one of the algorithms loses consistency. The analysis indicates that Proof-of-Authority for permissioned blockchains deployed over WANs experiencing Byzantine nodes does not provide adequate consistency guarantees for scenarios where data integrity is essential. We claim that the Practical Byzantine Fault-Tolerant protocol is actually a better fit for permissioned blockchains, despite a limited loss in terms of performance. The second contribution is the realisation of a benchmark for practical evaluations. We design a general model for permissioned blockchains under which performance and security guarantees can be benchmarked. However, because no experiment can verify all the possible security issues permitted by the model, we prototype an adversarial model which simulates three attacks that are feasible for a blockchain system. We then integrate this attacker model into a real blockchain client to evaluate the resiliency of the system and how much the attacks impact performance and security guarantees.", "title": "" }, { "docid": "42979dd6ad989896111ef4de8d26b2fb", "text": "Online dating services let users expand their dating pool beyond their social network and specify important characteristics of potential partners. To assess compatibility, users share personal information — e.g., identifying details or sensitive opinions about sexual preferences or worldviews — in profiles or in one-on-one communication. Thus, participating in online dating poses inherent privacy risks. How people reason about these privacy risks in modern online dating ecosystems has not been extensively studied. We present the results of a survey we designed to examine privacy-related risks, practices, and expectations of people who use or have used online dating, then delve deeper using semi-structured interviews. We additionally analyzed 400 Tinder profiles to explore how these issues manifest in practice. 
Our results reveal tensions between privacy and competing user values and goals, and we demonstrate how these results can inform future designs.", "title": "" }, { "docid": "d3a455e1c8a17f1111e380607b9d4dd0", "text": "This paper addresses a robust method for optimal Fractional order PID (FOPID) control of automatic Voltage Regulation (AVR) system to damp terminal voltage oscillation following disturbances in power systems. The optimization is carried out by the Imperialist Competitive algorithm Optimization (ICA) to improve their situation. The optimal tuning problem of the FOPID gains to control of AVR system against parametric uncertainties is formulated as an optimization problem according to time domain based objective function. It is carried out under multiple operation conditions to achieve the desired level of terminal voltage regulation. The results analysis reveals that the ICA based FOPID type AVR control system is effective and provides good terminal voltage oscillations damping ability.", "title": "" }, { "docid": "813238ec00d6ee78ff9a584a152377f6", "text": "Exercise-induced muscle injury in humans frequently occurs after unaccustomed exercise, particularly if the exercise involves a large amount of eccentric (muscle lengthening) contractions. Direct measures of exercise-induced muscle damage include cellular and subcellular disturbances, particularly Z-line streaming. Several indirectly assessed markers of muscle damage after exercise include increases in T2 signal intensity via magnetic resonance imaging techniques, prolonged decreases in force production measured during both voluntary and electrically stimulated contractions (particularly at low stimulation frequencies), increases in inflammatory markers both within the injured muscle and in the blood, increased appearance of muscle proteins in the blood, and muscular soreness. Although the exact mechanisms to explain these changes have not been delineated, the initial injury is ascribed to mechanical disruption of the fiber, and subsequent damage is linked to inflammatory processes and to changes in excitation-contraction coupling within the muscle. Performance of one bout of eccentric exercise induces an adaptation such that the muscle is less vulnerable to a subsequent bout of eccentric exercise. Although several theories have been proposed to explain this \"repeated bout effect,\" including altered motor unit recruitment, an increase in sarcomeres in series, a blunted inflammatory response, and a reduction in stress-susceptible fibers, there is no general agreement as to its cause. In addition, there is controversy concerning the presence of sex differences in the response of muscle to damage-inducing exercise. In contrast to the animal literature, which clearly shows that females experience less damage than males, research using human studies suggests that there is either no difference between men and women or that women are more prone to exercise-induced muscle damage than are men.", "title": "" } ]
scidocsrr
919fc12c8725c3ff48e0bb7fce652fe0
Broadband CPW-Fed Circularly Polarized Square Slot Antenna With Lightening-Shaped Feedline and Inverted-L Grounded Strips
[ { "docid": "a1d0bf0d28bbe3dd568e7e01bc9d59c3", "text": "A novel coupling technique for circularly polarized annular-ring patch antenna is developed and discussed. The circular polarization (CP) radiation of the annular-ring patch antenna is achieved by a simple microstrip feed line through the coupling of a fan-shaped patch on the same plane of the antenna. Proper positioning of the coupling fan-shaped patch excites two orthogonal resonant modes with 90 phase difference, and a pure circular polarization is obtained. The dielectric material is a cylindrical block of ceramic with a permittivity of 25 and that reduces the size of the antenna. The prototype has been designed and fabricated and found to have an impedance bandwidth of 2.3% and a 3 dB axial-ratio bandwidth of about 0.6% at the center frequency of 2700 MHz. The characteristics of the proposed antenna have been by simulation software HFSS and experiment. The measured and simulated results are in good agreement.", "title": "" }, { "docid": "fb3e9503a9f4575f5ecdbfaaa80638d0", "text": "This paper presents a new wideband circularly polarized square slot antenna (CPSSA) with a coplanar waveguide (CPW) feed. The proposed antenna features two inverted-L grounded strips around two opposite corners of the slot and a widened tuning stub protruded into the slot from the signal strip of the CPW. Broadside circular-polarization (CP) radiation can be easily obtained using a simple design procedure. For the optimized antenna prototype, the measured bandwidth with an axial ratio (AR) of less than 3 dB is larger than 25% and the measured VSWR les 2 impedance bandwidth is as large as 52%.", "title": "" }, { "docid": "6fe204d344dd0e05988d5383af28a165", "text": "A novel design of a circularly polarized annular-ring slot antenna is discussed. The circular polarization is attained through a newly proposed double-bent microstripline that feeds the antenna at two different positions. Several structural parameters were experimentally studied with care to establish a design procedure, which was subsequently drawn into a design flow chart. Validation was carried out using the antennas designed at 3.5 and 1.59 GHz. The measured 3-dB axial-ratio bandwidth (ARBW) for the former is 10.5% and for the latter, 10.0%, which is larger than the 8.5% 3-dB ARBW required by an Inmarsat application.", "title": "" } ]
[ { "docid": "4cbfabfec16dcf1153e83baa4433a6c2", "text": "BACKGROUND\nTo investigate the level of posttraumatic stress and depressive symptoms, and background risk and protective factors that might increase or ameliorate this distress amongst unaccompanied asylum-seeking children and adolescents (UASC).\n\n\nMETHODS\nCross-sectional survey carried out in London. Participants were 78 UASC aged 13-18 years, predominantly from the Balkans and Africa, compared with 35 accompanied refugee children. Measures included self-report questionnaires of war trauma, posttraumatic stress and depressive symptoms.\n\n\nRESULTS\nUASC had experienced high levels of losses and war trauma, and posttraumatic stress symptoms. Predictors of high posttraumatic symptoms included low-support living arrangements, female gender and trauma events, and increasing age only amongst the UASC. High depressive scores were associated with female gender, and region of origin amongst the UASC.\n\n\nCONCLUSION\nUASC might have less psychological distress if offered high-support living arrangements and general support as they approach the age of 18 years, but prospective studies are required to investigate the range of risk and protective factors.", "title": "" }, { "docid": "4cba17be3bb11ba3f2051f5e574a2789", "text": "The recent advances in RFID offer vast opportunities for research, development and innovation in agriculture. The aim of this paper is to give readers a comprehensive view of current applications and new possibilities, but also explain the limitations and challenges of this technology. RFID has been used for years in animal identification and tracking, being a common practice in many farms. Also it has been used in the food chain for traceability control. The implementation of sensors in tags, make possible to monitor the cold chain of perishable food products and the development of new applications in fields like environmental monitoring, irrigation, specialty crops and farm machinery. However, it is not all advantages. There are also challenges and limitations that should be faced in the next years. The operation in harsh environments, with dirt, extreme temperatures; the huge volume of data that are difficult to manage; the need of longer reading ranges, due to the reduction of signal strength due to propagation in crop canopy; the behavior of the different frequencies, understanding what is the right one for each application; the diversity of the standards and the level of granularity are some of them. 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "8f8bd08f73ee191a1f826fa0d61ff149", "text": "We propose an algorithm for designing linear equalizers that maximize the structural similarity (SSIM) index between the reference and restored signals. The SSIM index has enjoyed considerable application in the evaluation of image processing algorithms. Algorithms, however, have not been designed yet to explicitly optimize for this measure. The design of such an algorithm is nontrivial due to the nonconvex nature of the distortion measure. In this paper, we reformulate the nonconvex problem as a quasi-convex optimization problem, which admits a tractable solution. We compute the optimal solution in near closed form, with complexity of the resulting algorithm comparable to complexity of the linear minimum mean squared error (MMSE) solution, independent of the number of filter taps. 
To demonstrate the usefulness of the proposed algorithm, it is applied to restore images that have been blurred and corrupted with additive white gaussian noise. As a special case, we consider blur-free image denoising. In each case, its performance is compared to a locally adaptive linear MSE-optimal filter. We show that the images denoised and restored using the SSIM-optimal filter have higher SSIM index, and superior perceptual quality than those restored using the MSE-optimal adaptive linear filter. Through these results, we demonstrate that a) designing image processing algorithms, and, in particular, denoising and restoration-type algorithms, can yield significant gains over existing (in particular, linear MMSE-based) algorithms by optimizing them for perceptual distortion measures, and b) these gains may be obtained without significant increase in the computational complexity of the algorithm.", "title": "" }, { "docid": "33b63fe07849be342beaf3b31dc0d6da", "text": "Infrared sensors are used in Photoplethysmography measurements (PPG) to get blood flow parameters in the vascular system. It is a simple, low-cost non-invasive optical technique that is commonly placed on a finger or toe, to detect blood volume changes in the micro-vascular bed of tissue. The sensor use an infrared source and a photo detector to detect the infrared wave which is not absorbed. The recorded infrared waveform at the detector side is called the PPG signal. This paper reviews the various blood flow parameters that can be extracted from this PPG signal including the existence of an endothelial disfunction as an early detection tool of vascular diseases.", "title": "" }, { "docid": "0f45ccb6924ea5a5f54ae14bb13d7b1d", "text": "HER2 (human epidermal growth factor receptor 2) is overexpressed in 15 to 20% of breast cancer. Anti-HER2 targeted therapies, notably trastuzumab, have transformed the natural history of this disease. Trastuzumab emtansine, consisting of trastuzumab coupled to a cytotoxic agent, emtansine (DM1), by a stable linker, has been approved in November 2013 by the European Medicine Agency. Trastuzumab emtansine targets and inhibits HER2 signaling, but also allows emtansine to be directly delivered inside HER2-positive cancer cells. It is indicated as single-agent in taxane and trastuzumab-pretreated HER2-positive breast cancer patients with metastatic and locally recurrent unresecable disease or relapsing within 6 months of the end of adjuvant therapy. This indication is based on the results of the EMILIA study, an open label phase III randomized trial comparing trastuzumab emtansine to lapatinib-capecitabine. The two primary endpoints were reached. The progression-free survival was 6.4 months in the lapatinib-capecitabine arm versus 9.6 months for the trastuzumab emtansine arm (HR=0.65; 95% CI=0.55-0.77, P<0.001). Overall survival at the second interim analysis was 25.1 months in the lapatinib-capecitabine arm versus 30.9 months in the trastuzumab emtansine arm (HR=0.68; 95% CI=0.55-0.85, P<0.001). Moreover, adverse events were more frequent in the lapatinib-capecitabine arm.", "title": "" }, { "docid": "450f13659ece54bee1b4fe61cc335eb2", "text": "Though considerable effort has recently been devoted to hardware realization of one-dimensional chaotic systems, the influence of implementation inaccuracies is often underestimated and limited to non-idealities in the non-linear map. Here we investigate the consequences of sample-and-hold errors. 
Two degrees of freedom in the design space are considered: the choice of the map and the sample-and-hold architecture. Current-mode systems based on Bernoulli Shift, on Tent Map and on Tailed Tent Map are taken into account and coupled with an order-one model of sample-and-hold to ascertain error causes and suggest implementation improvements. key words: chaotic systems, analog circuits, sample-and-hold errors", "title": "" }, { "docid": "92dabad10ff49f307138e0738d8ebd50", "text": "Traffic forecasting is an important task which is required by overload warning and capacity planning for mobile networks. Based on analysis of real data collected by China Mobile Communications Corporation (CMCC) Heilongjiang Co. Ltd, this paper proposes to use the multiplicative seasonal ARIMA models for mobile communication traffic forecasting. Experiments and test results show that the whole solution presented in this paper is feasible and effective to fulfill the requirements in traffic forecasting application for mobile networks.", "title": "" }, { "docid": "66044816ca1af0198acd27d22e0e347e", "text": "BACKGROUND\nThe Close Kinetic Chain Upper Extremity Stability Test (CKCUES test) is a low cost shoulder functional test that could be considered as a complementary and objective clinical outcome for shoulder performance evaluation. However, its reliability was tested only in recreational athletes' males and there are no studies comparing scores between sedentary and active samples. The purpose was to examine inter and intrasession reliability of CKCUES Test for samples of sedentary male and female with (SIS), for samples of sedentary healthy male and female, and for male and female samples of healthy upper extremity sport specific recreational athletes. Other purpose was to compare scores within sedentary and within recreational athletes samples of same gender.\n\n\nMETHODS\nA sample of 108 subjects with and without SIS was recruited. Subjects were tested twice, seven days apart. Each subject performed four test repetitions, with 45 seconds of rest between them. The last three repetitions were averaged and used to statistical analysis. Intraclass Correlation Coefficient ICC2,1 was used to assess intrasession reliability of number of touches score and ICC2,3 was used to assess intersession reliability of number of touches, normalized score, and power score. Test scores within groups of same gender also were compared. Measurement error was determined by calculating the Standard Error of the Measurement (SEM) and Minimum detectable change (MDC) for all scores.\n\n\nRESULTS\nThe CKCUES Test showed excellent intersession reliability for scores in all samples. Results also showed excellent intrasession reliability of number of touches for all samples. Scores were greater in active compared to sedentary, with exception of power score. All scores were greater in active compared to sedentary and SIS males and females. SEM ranged from 1.45 to 2.76 touches (based on a 95% CI) and MDC ranged from 2.05 to 3.91(based on a 95% CI) in subjects with and without SIS. At least three touches are needed to be considered a real improvement on CKCUES Test scores.\n\n\nCONCLUSION\nResults suggest CKCUES Test is a reliable tool to evaluate upper extremity functional performance for sedentary, for upper extremity sport specific recreational, and for sedentary males and females with SIS.", "title": "" }, { "docid": "bd3feae3ff8f8546efc1290e325b5a4e", "text": "A bond pad failure mechanism of galvanic corrosion was studied. 
Analysis results showed that over-etch process, EKC and DI water over cleaning revealed more pitting with Cu seed due to galvanic corrosion. To control and eliminate galvanic corrosion, the etch recipe was optimized and etch time was reduced about 15% to prevent damaging the native oxide. EKC cleaning time was remaining unchanged in order to maintain bond pad F level at minimum level. In this study, the PRS process was also optimized and CF4 gas ratio was reduced about 45%. Moreover, 02 process was added after PRS process so as to increase the native oxide layer on Al bondpads to prevent galvanic corrosion.", "title": "" }, { "docid": "68489ec6e39ffd95d5df7d6817474cde", "text": "Foster B-trees are a new variant of B-trees that combines advantages of prior B-tree variants optimized for many-core processors and modern memory hierarchies with flash storage and nonvolatile memory. Specific goals include: (i) minimal concurrency control requirements for the data structure, (ii) efficient migration of nodes to new storage locations, and (iii) support for continuous and comprehensive self-testing. Like Blink-trees, Foster B-trees optimize latching without imposing restrictions or specific designs on transactional locking, for example, key range locking. Like write-optimized B-trees, and unlike Blink-trees, Foster B-trees enable large writes on RAID and flash devices as well as wear leveling and efficient defragmentation. Finally, they support continuous and inexpensive yet comprehensive verification of all invariants, including all cross-node invariants of the B-tree structure. An implementation and a performance evaluation show that the Foster B-tree supports high concurrency and high update rates without compromising consistency, correctness, or read performance.", "title": "" }, { "docid": "37b3b7a5af646fbc00708f136641f617", "text": "Recent advances of 3D acquisition devices have enabled large-scale acquisition of 3D scene data. Such data, if completely and well annotated, can serve as useful ingredients for a wide spectrum of computer vision and graphics works such as data-driven modeling and scene understanding, object detection and recognition. However, annotating a vast amount of 3D scene data remains challenging due to the lack of an effective tool and/or the complexity of 3D scenes (e.g., clutter, varying illumination conditions). This paper aims to build a robust annotation tool that effectively and conveniently enables the segmentation and annotation of massive 3D data. Our tool works by coupling 2D and 3D information via an interactive framework, through which users can provide high-level semantic annotation for objects. We have experimented our tool and found that a typical indoor scene could be well segmented and annotated in less than 30 minutes by using the tool, as opposed to a few hours if done manually. Along with the tool, we created a dataset of over a hundred 3D scenes associated with complete annotations using our tool. Both the tool and dataset are available at http://scenenn.net.", "title": "" }, { "docid": "9128e3786ba8d0ab36aa2445d84de91c", "text": "A technique for the correction of flat or inverted nipples is presented. 
The procedure is a combination of the square flap method, which better shapes the corrected nipple, and the dermal sling, which provides good support for the repaired nipple.", "title": "" }, { "docid": "3d3110b19142e9a01bf4252742ce9586", "text": "Detecting unsolicited content and the spammers who create it is a long-standing challenge that affects all of us on a daily basis. The recent growth of richly-structured social networks has provided new challenges and opportunities in the spam detection landscape. Motivated by the Tagged.com social network, we develop methods to identify spammers in evolving multi-relational social networks. We model a social network as a time-stamped multi-relational graph where vertices represent users, and edges represent different activities between them. To identify spammer accounts, our approach makes use of structural features, sequence modelling, and collective reasoning. We leverage relational sequence information using k-gram features and probabilistic modelling with a mixture of Markov models. Furthermore, in order to perform collective reasoning and improve the predictive power of a noisy abuse reporting system, we develop a statistical relational model using hinge-loss Markov random fields (HL-MRFs), a class of probabilistic graphical models which are highly scalable. We use Graphlab Create and Probabilistic Soft Logic (PSL) to prototype and experimentally evaluate our solutions on internet-scale data from Tagged.com. Our experiments demonstrate the effectiveness of our approach, and show that models which incorporate the multi-relational nature of the social network significantly gain predictive performance over those that do not.", "title": "" }, { "docid": "072b36d53de6a1a1419b97a1503f8ecd", "text": "In classical control of brushless dc (BLDC) motors, flux distribution is assumed trapezoidal and fed current is controlled rectangular to obtain a desired constant torque. However, in reality, this assumption may not always be correct, due to nonuniformity of magnetic material and design trade-offs. These factors, together with current controller limitation, can lead to an undesirable torque ripple. This paper proposes a new torque control method to attenuate torque ripple of BLDC motors with un-ideal back electromotive force (EMF) waveforms. In this method, the action time of pulses, which are used to control the corresponding switches, are calculated in the torque controller regarding actual back EMF waveforms in both normal conduction period and commutation period. Moreover, the influence of finite dc bus supply voltage is considered in the commutation period. Simulation and experimental results are shown that, compared with conventional rectangular current control, the proposed torque control method results in apparent reduction of the torque ripple.", "title": "" }, { "docid": "4bac5fa3b753c6da269a8c9d6d6ecb5a", "text": "The use of antimicrobial compounds in food animal production provides demonstrated benefits, including improved animal health, higher production and, in some cases, reduction in foodborne pathogens. However, use of antibiotics for agricultural purposes, particularly for growth enhancement, has come under much scrutiny, as it has been shown to contribute to the increased prevalence of antibiotic-resistant bacteria of human significance. The transfer of antibiotic resistance genes and selection for resistant bacteria can occur through a variety of mechanisms, which may not always be linked to specific antibiotic use. 
Prevalence data may provide some perspective on occurrence and changes in resistance over time; however, the reasons are diverse and complex. Much consideration has been given this issue on both domestic and international fronts, and various countries have enacted or are considering tighter restrictions or bans on some types of antibiotic use in food animal production. In some cases, banning the use of growth-promoting antibiotics appears to have resulted in decreases in prevalence of some drug resistant bacteria; however, subsequent increases in animal morbidity and mortality, particularly in young animals, have sometimes resulted in higher use of therapeutic antibiotics, which often come from drug families of greater relevance to human medicine. While it is clear that use of antibiotics can over time result in significant pools of resistance genes among bacteria, including human pathogens, the risk posed to humans by resistant organisms from farms and livestock has not been clearly defined. As livestock producers, animal health experts, the medical community, and government agencies consider effective strategies for control, it is critical that science-based information provide the basis for such considerations, and that the risks, benefits, and feasibility of such strategies are fully considered, so that human and animal health can be maintained while at the same time limiting the risks from antibiotic-resistant bacteria.", "title": "" }, { "docid": "8386a4e4a2b5e2f21b1e8b7f1419b1b3", "text": "This hackathon will bring together a number of ontologies, ontology design patterns and high level semantic abstractions to create an ontology around the area of accident and risk.", "title": "" }, { "docid": "42d27f1a6ad81e13c449a08a6ada34d6", "text": "Face detection of comic characters is a necessary step in most applications, such as comic character retrieval, automatic character classification and comic analysis. However, the existing methods were developed for simple cartoon images or small size comic datasets, and detection performance remains to be improved. In this paper, we propose a Faster R-CNN based method for face detection of comic characters. Our contribution is twofold. First, for the binary classification task of face detection, we empirically find that the sigmoid classifier shows a slightly better performance than the softmax classifier. Second, we build two comic datasets, JC2463 and AEC912, consisting of 3375 comic pages in total for characters face detection evaluation. Experimental results have demonstrated that the proposed method not only performs better than existing methods, but also works for comic images with different drawing styles.", "title": "" }, { "docid": "1c7251c55cf0daea9891c8a522bbd3ec", "text": "The role of computers in the modern office has divided our activities between virtual interactions in the realm of the computer and physical interactions with real objects within the traditional office infrastructure. This paper extends previous work that has attempted to bridge this gap, to connect physical objects with virtual representations or computational functionality, via various types of tags. We discuss a variety of scenarios we have implemented using a novel combination of inexpensive, unobtrusive and easy to use RFID tags, tag readers, portable computers and wireless networking.
This novel combination demonstrates the utility of invisibly, seamlessly and portably linking physical objects to networked electronic services and actions that are naturally associated with their form.", "title": "" }, { "docid": "473968c14db4b189af126936fd5486ca", "text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.", "title": "" } ]
scidocsrr
d07d990e99d1fce806fe9866c55422c1
Segment-based injection attacks against collaborative filtering recommender systems
[ { "docid": "e043f20a60df6399c2f93d064d61e648", "text": "Recent research in recommender systems has shown that collaborative filtering algorithms are highly susceptible to attacks that insert biased profile data. Theoretical analyses and empirical experiments have shown that certain attacks can have a significant impact on the recommendations a system provides. These analyses have generally not taken into account the cost of mounting an attack or the degree of prerequisite knowledge for doing so. For example, effective attacks often require knowledge about the distribution of user ratings: the more such knowledge is required, the more expensive the attack to be mounted. In our research, we are examining a variety of attack models, aiming to establish the likely practical risks to collaborative systems. In this paper, we examine user-based collaborative filtering and some attack models that are successful against it, including a limited knowledge \"bandwagon\" attack that requires only that the attacker identify a small number of very popular items and a user-focused \"favorite item\" attack that is also effective against item-based algorithms.", "title": "" } ]
[ { "docid": "ecce348941aeda57bd66dbd7836923e6", "text": "Moana (2016) continues a tradition of Disney princess movies that perpetuate gender stereotypes. The movie contains the usual Electral undercurrent, with Moana seeking to prove her independence to her overprotective father. Moana’s partner in her adventures, Maui, is overtly hypermasculine, a trait epitomized by a phallic fishhook that is critical to his identity. Maui’s struggles with shapeshifting also reflect male anxieties about performing masculinity. Maui violates the Mother Island, first by entering her cave and then by using his fishhook to rob her of her fertility. The repercussions of this act are the basis of the plot: the Mother Island abandons her form as a nurturing, youthful female (Te Fiti) focused on creation to become a vengeful lava monster (Te Kā). At the end, Moana successfully urges Te Kā to get in touch with her true self, a brave but simple act that is sufficient to bring back Te Fiti, a passive, smiling green goddess. The association of youthful, fertile females with good and witch-like infertile females with evil implies that women’s worth and well-being are dependent upon their procreative function. Stereotypical gender tropes that also include female abuse of power and a narrow conception of masculinity merit analysis in order to further progress in recognizing and addressing patterns of gender hegemony in popular Disney films.", "title": "" }, { "docid": "d83e069765ae88cadf16f300031fd48c", "text": "BACKGROUND\nPhotodynamic therapy (PDT) using topical 5-aminolaevulinic acid (5-ALA) as a photosensitizer has been reported in the treatment of both neoplastic and benign cutaneous disorders.\n\n\nOBJECTIVES\nTo evaluate the efficacy of photodynamic therapy in selected patients with Darier's disease (keratosis follicularis).\n\n\nMETHODS\nSix patients with Darier's disease were assessed before and after treatment with PDT using 5-ALA and mean fluence rates of 110-150 mW cm-2.\n\n\nRESULTS\nOf the six patients, one was unable to tolerate the treatment. Of the remaining five, all experienced an initial inflammatory response that lasted two to three weeks. In four of the five patients, this was followed by sustained clearance or improvement over a followup period of six months to three years. Three of these four patients were on systemic retinoids and the fourth had discontinued acitretin prior to PDT. In the fifth patient partial improvement was followed by recurrence after etretinate therapy was discontinued. Biopsy specimens taken immediately after the procedure in two patients demonstrated a mild inflammatory cell infiltrate in the dermis. A biopsy obtained eighteen months after PDT from a successfully treated area showed no signs of Darier's disease and a subtle increase of collagen in the upper dermis.\n\n\nCONCLUSIONS\nPhotodynamic therapy can be viewed as a potential adjunctive modality for Darier's disease but should not be considered as a substitute for retinoids in patients who require systemic treatment.", "title": "" }, { "docid": "b0f752f3886de8e5d4fe0f186a495c68", "text": "Granular materials composed of a mixture of grain sizes are notoriously prone to segregation during shaking or transport. In this paper, a binary mixture theory is used to formulate a model for kinetic sieving of large and small particles in thin, rapidly flowing avalanches, which occur in many industrial and geophysical free-surface flows. 
The model is based on a simple percolation idea, in which the small particles preferentially fall into underlying void space and lever large particles upwards. Exact steady-state solutions have been constructed for general steady uniform velocity fields, as well as time-dependent solutions for plug-flow, that exploit the decoupling of material columns in the avalanche. All the solutions indicate the development of concentration shocks, which are frequently observed in experiments. A shock-capturing numerical algorithm is formulated to solve general problems and is used to investigate segregation in flows with weak shear.", "title": "" }, { "docid": "83071476dae1d2a52e137683616668c2", "text": "We present a strategy to make productive use of semantically-related social data, from a user-centered semantic network, in order to help users (tourists and citizens in general) to discover cultural heritage, points of interest and available services in a smart city. This data can be used to personalize recommendations in a smart tourism application. Our approach is based on flow centrality metrics typically used in social network analysis: flow betweenness, flow closeness and eccentricity. These metrics are useful to discover relevant nodes within the network yielding nodes that can be interpreted as suggestions (venues or services) to users. We describe the semantic network built on graph model, as well as social metrics algorithms used to produce recommendations. We also present challenges and results from a prototypical implementation applied to the case study of the City of Puebla, Mexico.", "title": "" }, { "docid": "9b4dd57f571d0ec4ab9daf71549b6958", "text": "Concurrency errors, like data races and deadlocks, are difficult to find due to the large number of possible interleavings in a parallel program. Dynamic tools analyze a single observed execution of a program, and even with multiple executions they can not reveal possible errors in other reorderings. This work takes a single program observation and produces a set of alternative orderings of the synchronization primitives that lead to a concurrency error. The new reorderings are enforced under a happens-before detector to discard reorderings that are infeasible or do not produce any error report. We evaluate our approach against multiple repetitions of a state of the art happens-before detector. The results show that through interleaving inference more errors are found and the counterexamples enable easier reproducibility by the developer.", "title": "" }, { "docid": "ce9084c2ac96db6bca6ddebe925c3d42", "text": "Tactical driving decision making is crucial for autonomous driving systems and has attracted considerable interest in recent years. In this paper, we propose several practical components that can speed up deep reinforcement learning algorithms towards tactical decision making tasks: 1) nonuniform action skipping as a more stable alternative to action-repetition frame skipping, 2) a counterbased penalty for lanes on which ego vehicle has less right-of-road, and 3) heuristic inference-time action masking for apparently undesirable actions. We evaluate the proposed components in a realistic driving simulator and compare them with several baselines. 
Results show that the proposed scheme provides superior performance in terms of safety, efficiency, and comfort.", "title": "" }, { "docid": "67a898d385d4f8541361f86abc9cc378", "text": "Clustering Web services into functionally similar clusters is a very efficient approach to service discovery. A principal issue for clustering is computing the semantic similarity between services. Current approaches use similarity-distance measurement methods such as keyword, information-retrieval or ontology based methods. These approaches have problems that include discovering semantic characteristics, loss of semantic information and a shortage of high-quality ontologies. In this paper, the authors present a method that first adopts ontology learning to generate ontologies via the hidden semantic patterns existing within complex terms. If calculating similarity using the generated ontology fails, it then applies an information-retrieval-based method. Another important issue is identifying the most suitable cluster representative. This paper proposes an approach to identifying the cluster center by combining service similarity with term frequency–inverse document frequency values of service names. Experimental results show that our term-similarity approach outperforms comparable existing approaches. They also demonstrate the positive effects of our cluster-center identification approach. Web Service Clustering using a Hybrid Term-Similarity Measure with Ontology Learning", "title": "" }, { "docid": "8792d60d2fd12a407091e7dc4e31ebaf", "text": "Availability of handy RGB-D sensors has brought about a surge of gesture recognition research and applications. Among various approaches, one shot learning approach is advantageous because it requires minimum amount of data. Here, we provide a thorough review about one-shot learning gesture recognition from RGB-D data and propose a novel spatiotemporal feature extracted from RGB-D data, namely mixed features around sparse keypoints (MFSK). In the review, we analyze the challenges that we are facing, and point out some future research directions which may enlighten researchers in this field. The proposed MFSK feature is robust and invariant to scale, rotation and partial occlusions. To alleviate the insufficiency of one shot training samples, we augment the training samples by artificially synthesizing versions of various temporal scales, which is beneficial for coping with gestures performed at varying speed. We evaluate the proposed method on the Chalearn gesture dataset (CGD). The results show that our approach outperforms all currently published approaches on the challenging data of CGD, such as translated, scaled and occluded subsets. When applied to the RGB-D datasets that are not one-shot (e.g., the Cornell Activity Dataset-60 and MSR Daily Activity 3D dataset), the proposed feature also produces very promising results under leave-one-out cross validation or one-shot learning.", "title": "" }, { "docid": "eb6675c6a37aa6839fa16fe5d5220cfb", "text": "In this paper, we propose an efficient method to detect the underlying structures in data. The same as RANSAC, we randomly sample MSSs (minimal size samples) and generate hypotheses. Instead of analyzing each hypothesis separately, the consensus information in all hypotheses is naturally fused into a hypergraph, called random consensus graph, with real structures corresponding to its dense subgraphs. The sampling process is essentially a progressive refinement procedure of the random consensus graph. 
Due to the huge number of hyperedges, it is generally inefficient to detect dense subgraphs on random consensus graphs. To overcome this issue, we construct a pairwise graph which approximately retains the dense subgraphs of the random consensus graph. The underlying structures are then revealed by detecting the dense subgraphs of the pair-wise graph. Since our method fuses information from all hypotheses, it can robustly detect structures even under a small number of MSSs. The graph framework enables our method to simultaneously discover multiple structures. Besides, our method is very efficient, and scales well for large scale problems. Extensive experiments illustrate the superiority of our proposed method over previous approaches, achieving several orders of magnitude speedup along with satisfactory accuracy and robustness.", "title": "" }, { "docid": "47c723b0c41fb26ed7caa077388e2e1b", "text": "Automatic dependent surveillance-broadcast (ADS-B) is the communications protocol currently being rolled out as part of next-generation air transportation systems. As the heart of modern air traffic control, it will play an essential role in the protection of two billion passengers per year, in addition to being crucial to many other interest groups in aviation. The inherent lack of security measures in the ADS-B protocol has long been a topic in both the aviation circles and in the academic community. Due to recently published proof-of-concept attacks, the topic is becoming ever more pressing, particularly with the deadline for mandatory implementation in most airspaces fast approaching. This survey first summarizes the attacks and problems that have been reported in relation to ADS-B security. Thereafter, it surveys both the theoretical and practical efforts that have been previously conducted concerning these issues, including possible countermeasures. In addition, the survey seeks to go beyond the current state of the art and gives a detailed assessment of security measures that have been developed more generally for related wireless networks such as sensor networks and vehicular ad hoc networks, including a taxonomy of all considered approaches.", "title": "" }, { "docid": "3abcfd48703b399404126996ca837f90", "text": "Various inductive loads used in all industries deals with the problem of power factor improvement. Capacitor bank connected in shunt helps in maintaining the power factor closer to unity. They improve the electrical supply quality and increase the efficiency of the system. Also the line losses are also reduced. Shunt capacitor banks are less costly and can be installed anywhere. This paper deals with shunt capacitor bank designing for power factor improvement considering overvoltages for substation installation. Keywords— Capacitor Bank, Overvoltage Consideration, Power Factor, Reactive Power", "title": "" }, { "docid": "bb5092ba6da834b3c5ebd8483ab5e9f0", "text": "Wireless Sensor Networks (WSNs) are a promising technology with applications in many areas such as environment monitoring, agriculture, the military field or health-care, to name but a few. Unfortunately, the wireless connectivity of the sensors opens doors to many security threats, and therefore, cryptographic solutions must be included on-board these devices and preferably in their design phase. In this vein, Random Number Generators (RNGs) play a critical role in security solutions such as authentication protocols or key-generation algorithms. 
In this article, an avant-garde proposal is presented, based on the cardiac signal generator we carry with us (our heart), which can be recorded with medical or even low-cost sensors with wireless connectivity. In particular, for the extraction of random bits, a multi-level decomposition has been performed by wavelet analysis. The proposal has been tested with one of the largest and most publicly available datasets of electrocardiogram signals (202 subjects and 24 h of recording time). Regarding the assessment, the proposed True Random Number Generator (TRNG) has been tested with the most demanding batteries of statistical tests (ENT, DIEHARDER and NIST), and this has been completed with a bias, distinctiveness and performance analysis. From the analysis conducted, it can be concluded that the output stream of our proposed TRNG behaves as a random variable and is suitable for securing WSNs.", "title": "" }, { "docid": "cf341e272dcc4773829f09e36a0519b3", "text": "Malicious Web sites are a cornerstone of Internet criminal activities. The dangers of these sites have created a demand for safeguards that protect end-users from visiting them. This article explores how to detect malicious Web sites from the lexical and host-based features of their URLs. We show that this problem lends itself naturally to modern algorithms for online learning. Online algorithms not only process large numbers of URLs more efficiently than batch algorithms, they also adapt more quickly to new features in the continuously evolving distribution of malicious URLs. We develop a real-time system for gathering URL features and pair it with a real-time feed of labeled URLs from a large Web mail provider. From these features and labels, we are able to train an online classifier that detects malicious Web sites with 99% accuracy over a balanced dataset.", "title": "" }, { "docid": "471bb6ffa65dac100e59837df9f57540", "text": "Given the existence of many change detection algorithms, each with its own peculiarities and strengths, we propose a combination strategy, which we termed IUTIS (In Unity There Is Strength), based on a Genetic Programming framework. This combination strategy is aimed at leveraging the strengths of the algorithms and compensating for their weaknesses. In this paper we show our findings in applying the proposed strategy in two different scenarios. The first scenario is purely performance-based. In the second scenario, performance and efficiency must be balanced. Results demonstrate that starting from simple algorithms we can achieve comparable results with respect to more complex state-of-the-art change detection algorithms, while keeping the computational complexity affordable for real-time applications.", "title": "" }, { "docid": "2d105fcec4109a6bc290c616938012f3", "text": "One of the biggest challenges in automated driving is the ability to determine the vehicle's location in real time - a process known as self-localization or ego-localization. An automated driving system must be reliable under harsh conditions and environmental uncertainties (e.g. GPS denial or imprecision), sensor malfunction, road occlusions, poor lighting, and inclement weather. To cope with this myriad of potential problems, systems typically consist of a GPS receiver, in-vehicle sensors (e.g. cameras and LiDAR devices), and 3D High-Definition (3D HD) Maps. In this paper, we review state-of-the-art self-localization techniques, and present a benchmark for the task of image-based vehicle self-localization.
Our dataset was collected on 10km of the Warren Freeway in the San Francisco Area under reasonable traffic and weather conditions. As input to the localization process, we provide timestamp-synchronized, consumer-grade monocular video frames (with camera intrinsic parameters), consumer-grade GPS trajectory, and production-grade 3D HD Maps. For evaluation, we provide survey-grade GPS trajectory. The goal of this dataset is to standardize and formalize the challenge of accurate vehicle self-localization and provide a benchmark to develop and evaluate algorithms.", "title": "" }, { "docid": "d35dc7e653dbe5dca7e1238ea8ced0a5", "text": "Temperature-aware computing is becoming more important in design of computer systems as power densities are increasing and the implications of high operating temperatures result in higher failure rates of components and increased demand for cooling capability. Computer architects and system software designers need to understand the thermal consequences of their proposals, and develop techniques to lower operating temperatures to reduce both transient and permanent component failures. Recognizing the need for thermal modeling tools to support those researches, there has been work on modeling temperatures of processors at the micro-architectural level which can be easily understood and employed by computer architects for processor designs. However, there is a dearth of such tools in the academic/research community for undertaking architectural/systems studies beyond a processor - a server box, rack or even a machine room. In this paper we presents a detailed 3-dimensional computational fluid dynamics based thermal modeling tool, called ThermoStat, for rack-mounted server systems. We conduct several experiments with this tool to show how different load conditions affect the thermal profile, and also illustrate how this tool can help design dynamic thermal management techniques. We propose reactive and proactive thermal management for rack mounted server and isothermal workload distribution for rack.", "title": "" }, { "docid": "b78f935622b143bbbcaff580ba42e35d", "text": "A churn is defined as the loss of a user in an online social network (OSN). Detecting and analyzing user churn at an early stage helps to provide timely delivery of retention solutions (e.g., interventions, customized services, and better user interfaces) that are useful for preventing users from churning. In this paper we develop a prediction model based on a clustering scheme to analyze the potential churn of users. In the experiment, we test our approach on a real-name OSN which contains data from 77,448 users. A set of 24 attributes is extracted from the data. A decision tree classifier is used to predict churn and non-churn users of the future month. In addition, k-means algorithm is employed to cluster the actual churn users into different groups with different online social networking behaviors. Results show that the churn and nonchurn prediction accuracies of ∼65% and ∼77% are achieved respectively. Furthermore, the actual churn users are grouped into five clusters with distinguished OSN activities and some suggestions of retaining these users are provided.", "title": "" }, { "docid": "81bfa44ec29532d07031fa3b74ba818d", "text": "We propose a recurrent extension of the Ladder networks [22] whose structure is motivated by the inference required in hierarchical latent variable models. 
We demonstrate that the recurrent Ladder is able to handle a wide variety of complex learning tasks that benefit from iterative inference and temporal modeling. The architecture shows close-to-optimal results on temporal modeling of video data, competitive results on music modeling, and improved perceptual grouping based on higher order abstractions, such as stochastic textures and motion cues. We present results for fully supervised, semi-supervised, and unsupervised tasks. The results suggest that the proposed architecture and principles are powerful tools for learning a hierarchy of abstractions, learning iterative inference and handling temporal information.", "title": "" }, { "docid": "84f5ab1dfcf6e03241fd72d3e76179f5", "text": "The goal of this work is to develop a meeting transcription system that can recognize speech even when utterances of different speakers are overlapped. While speech overlaps have been regarded as a major obstacle in accurately transcribing meetings, a traditional beamformer with a single output has been exclusively used because previously proposed speech separation techniques have critical constraints for application to real meetings. This paper proposes a new signal processing module, called an unmixing transducer, and describes its implementation using a windowed BLSTM. The unmixing transducer has a fixed number, say J, of output channels, where J may be different from the number of meeting attendees, and transforms an input multi-channel acoustic signal into J time-synchronous audio streams. Each utterance in the meeting is separated and emitted from one of the output channels. Then, each output signal can be simply fed to a speech recognition back-end for segmentation and transcription. Our meeting transcription system using the unmixing transducer outperforms a system based on a stateof-the-art neural mask-based beamformer by 10.8%. Significant improvements are observed in overlapped segments. To the best of our knowledge, this is the first report that applies overlapped speech recognition to unconstrained real meeting audio.", "title": "" }, { "docid": "725a6313495f71c66ec0a2b895887676", "text": "Schedulers used by modern OSs (e.g., Oracle Solaris 11™ and GNU/Linux) balance load by balancing the number of threads in run queues of different cores. While this approach is effective for a single CPU multicore system, we show that it can lead to a significant load imbalance across CPUs of a multi-CPU multicore system. Because different threads of a multithreaded application often exhibit different levels of CPU utilization, load cannot be measured in terms of the number of threads alone. We propose Tumbler that migrates the threads of a multithreaded program across multiple CPUs to balance the load across the CPUs. While Tumbler distributes the threads equally across the CPUs, its assignment of threads to CPUs is aimed at minimizing the variation in utilization of different CPUs to achieve load balance. We evaluated Tumbler using a wide variety of 35 multithreaded applications, and our experimental results show that Tumbler outperforms both Oracle Solaris 11™ and GNU/Linux.", "title": "" } ]
scidocsrr
26e69880a7476a3dbfac7c675643f30b
Contextual deep CNN based hyperspectral classification
[ { "docid": "dc83550afd690e371283428647ed806e", "text": "Recently, convolutional neural networks have demonstrated excellent performance on various visual tasks, including the classification of common two-dimensional images. In this paper, deep convolutional neural networks are employed to classify hyperspectral images directly in spectral domain. More specifically, the architecture of the proposed classifier contains five layers with weights which are the input layer, the convolutional layer, the max pooling layer, the full connection layer, and the output layer. These five layers are implemented on each spectral signature to discriminate against others. Experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods, such as support vector machines and the conventional deep learning-based methods.", "title": "" } ]
[ { "docid": "18140fdf4629a1c7528dcd6060f427c3", "text": "Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building a concept from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of natural language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides a user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts, we introduce a bipolar concept model and support for specifying irrelevant words. We validate the interactive lexicon building interface by a user study and expert reviews. Quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.", "title": "" }, { "docid": "384f7f309e996d4cd289228a3f368d93", "text": "With the advent of the ubiquitous era, many studies have been devoted to various situation-aware services in the semantic web environment. One of the most challenging studies involves implementing a situation-aware personalized music recommendation service which considers the user’s situation and preferences. Situation-aware music recommendation requires multidisciplinary efforts including low-level feature extraction and analysis, music mood classification and human emotion prediction. In this paper, we propose a new scheme for a situation-aware/user-adaptive music recommendation service in the semantic web environment. To do this, we first discuss utilizing knowledge for analyzing and retrieving music contents semantically, and a user adaptive music recommendation scheme based on semantic web technologies that facilitates the development of domain knowledge and a rule set. Based on this discussion, we describe our Context-based Music Recommendation (COMUS) ontology for modeling the user’s musical preferences and contexts, and supporting reasoning about the user’s desired emotions and preferences. Basically, COMUS defines an upper music ontology that captures concepts on the general properties of music such as titles, artists and genres. In addition, it provides functionality for adding domain-specific ontologies, such as music features, moods and situations, in a hierarchical manner, for extensibility. Using this context ontology, we believe that logical reasoning rules can be inferred based on high-level (implicit) knowledge such as situations from low-level (explicit) knowledge. As an innovation, our ontology can express detailed and complicated relations among music clips, moods and situations, which enables users to find appropriate music. We present some of the experiments we performed as a case-study for music recommendation.", "title": "" }, { "docid": "b40b81e25501b08a07c64f68c851f4a6", "text": "Variable reluctance (VR) resolver is widely used in traction motor for battery electric vehicle as well as hybrid electric vehicle as a rotor position sensor. VR resolver generates absolute position signal by using resolver-to-digital converter (RDC) in order to deliver exact position of permanent magnets in a rotor of traction motor to motor controller. 
This paper deals with fault diagnosis of VR resolver by using co-simulation analysis with RDC for position angle detection. As fault conditions, eccentricity of VR resolver, short circuit condition of excitation coil and output signal coils, and material problem of silicon steel in a view point of permeability are considered. 2D FEM is used for the output signal waveforms of SIN, COS and these waveforms are converted into absolute position angle by using the algorithm of RDC. For the verification of proposed analysis results, experiment on fault conditions was conducted and compared with simulation ones.", "title": "" }, { "docid": "14cb0e8fc4e8f82dc4e45d8562ca4bb2", "text": "Information security is one of the most important factors to be considered when secret information has to be communicated between two parties. Cryptography and steganography are the two techniques used for this purpose. Cryptography scrambles the information, but it reveals the existence of the information. Steganography hides the actual existence of the information so that anyone else other than the sender and the recipient cannot recognize the transmission. In steganography the secret information to be communicated is hidden in some other carrier in such a way that the secret information is invisible. In this paper an image steganography technique is proposed to hide audio signal in image in the transform domain using wavelet transform. The audio signal in any format (MP3 or WAV or any other type) is encrypted and carried by the image without revealing the existence to anybody. When the secret information is hidden in the carrier the result is the stego signal. In this work, the results show good quality stego signal and the stego signal is analyzed for different attacks. It is found that the technique is robust and it can withstand the attacks. The quality of the stego image is measured by Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Metric (SSIM), Universal Image Quality Index (UIQI). The quality of extracted secret audio signal is measured by Signal to Noise Ratio (SNR), Squared Pearson Correlation Coefficient (SPCC). The results show good values for these metrics. © 2015 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Graph Algorithms, High Performance Implementations and Applications (ICGHIA2014).", "title": "" }, { "docid": "a89cd3351d6a427d18a461893949e0d7", "text": "Touch is a powerful vehicle for communication between humans. The way we touch (how) embraces and mediates certain emotions such as anger, joy, fear, or love. While this phenomenon is well explored for human interaction, HCI research is only starting to uncover the fine granularity of sensory stimulation and responses in relation to certain emotions. Within this paper we present the findings from a study exploring the communication of emotions through a haptic system that uses tactile stimulation in mid-air. Here, haptic descriptions for specific emotions (e.g., happy, sad, excited, afraid) were created by one group of users to then be reviewed and validated by two other groups of users. We demonstrate the non-arbitrary mapping between emotions and haptic descriptions across three groups. This points to the huge potential for mediating emotions through mid-air haptics. 
We discuss specific design implications based on the spatial, directional, and haptic parameters of the created haptic descriptions and illustrate their design potential for HCI based on two design ideas.", "title": "" }, { "docid": "9826dcd8970429b1f3398128eec4335b", "text": "This article provides an overview of recent contributions to the debate on the ethical use of previously collected biobank samples, as well as a country report about how this issue has been regulated in Spain by means of the new Biomedical Research Act, enacted in the summer of 2007. By contrasting the Spanish legal situation with the wider discourse of international bioethics, we identify and discuss a general trend moving from the traditional requirements of informed consent towards new models more favourable to research in a post-genomic context.", "title": "" }, { "docid": "5daa3e5ed4e26184e4d5c7b967fac58d", "text": "Keyphrase extraction from a given document is a difficult task that requires not only local statistical information but also extensive background knowledge. In this paper, we propose a graph-based ranking approach that uses information supplied by word embedding vectors as the background knowledge. We first introduce a weighting scheme that computes informativeness and phraseness scores of words using the information supplied by both word embedding vectors and local statistics. Keyphrase extraction is performed by constructing a weighted undirected graph for a document, where nodes represent words and edges are co-occurrence relations of two words within a defined window size. The weights of edges are computed by the afore-mentioned weighting scheme, and a weighted PageRank algorithm is used to compute final scores of words. Keyphrases are formed in post-processing stage using heuristics. Our work is evaluated on various publicly available datasets with documents of varying length. We show that evaluation results are comparable to the state-of-the-art algorithms, which are often typically tuned to a specific corpus to achieve the claimed results.", "title": "" }, { "docid": "a7f046dcc5e15ccfbe748fa2af400c98", "text": "INTRODUCTION\nSmoking and alcohol use (beyond social norms) by health sciences students are behaviors contradictory to the social function they will perform as health promoters in their eventual professions.\n\n\nOBJECTIVES\nIdentify prevalence of tobacco and alcohol use in health sciences students in Mexico and Cuba, in order to support educational interventions to promote healthy lifestyles and development of professional competencies to help reduce the harmful impact of these legal drugs in both countries.\n\n\nMETHODS\nA descriptive cross-sectional study was conducted using quantitative and qualitative techniques. Data were collected from health sciences students on a voluntary basis in both countries using the same anonymous self-administered questionnaire, followed by an in-depth interview.\n\n\nRESULTS\nPrevalence of tobacco use was 56.4% among Mexican students and 37% among Cuban. It was higher among men in both cases, but substantial levels were observed in women as well. The majority of both groups were regularly exposed to environmental tobacco smoke. Prevalence of alcohol use was 76.9% in Mexican students, among whom 44.4% were classified as at-risk users. 
Prevalence of alcohol use in Cuban students was 74.1%, with 3.7% classified as at risk.\n\n\nCONCLUSIONS\nThe high prevalence of tobacco and alcohol use in these health sciences students is cause for concern, with consequences not only for their individual health, but also for their professional effectiveness in helping reduce these drugs' impact in both countries.", "title": "" }, { "docid": "565941db0284458e27485d250493fd2a", "text": "Identifying background (context) information in scientific articles can help scholars understand major contributions in their research area more easily. In this paper, we propose a general framework based on probabilistic inference to extract such context information from scientific papers. We model the sentences in an article and their lexical similarities as aMarkov Random Fieldtuned to detect the patterns that context data create, and employ a Belief Propagationmechanism to detect likely context sentences. We also address the problem of generating surveys of scientific papers. Our experiments show greater pyramid scores for surveys generated using such context information rather than citation sentences alone.", "title": "" }, { "docid": "bd8f4d5181d0b0bcaacfccd6fb0edd8b", "text": "Mass deployment of RF identification (RFID) is hindered by its cost per tag. The main cost comes from the application-specific integrated circuit (ASIC) chip set in a tag. A chipless tag costs less than a cent, and these have the potential for mass deployment for low-cost, item-level tagging as the replacement technology for optical barcodes. Chipless RFID tags can be directly printed on paper or plastic packets just like barcodes. They are highly useful for automatic identification and authentication, supply-chain automation, and medical applications. Among their potential industrial applications are authenticating of polymer bank notes; scanning of credit cards, library cards, and the like; tracking of inventory in retail settings; and identification of pathology and other medical test samples.", "title": "" }, { "docid": "00f1b97c7b948dd4029895a0ad5d577d", "text": "Ship design is a complex endeavor requiring the successful coordination of many different disciplines. According to various disciplines requirements, how to get a balanced performance is imperative in ship design. Thus, a all-in-one Multidisciplinary Design Optimization (MDO) approach is proposed to get the optimum performance of the ship considering three disciplines, structure; cargo loads and power of propulsion. In this research a Latin Hypercube Sampling (LHS) is employed to explore the design space and to sample data for covering the design space. For the purpose of reducing the calculation and saving the develop time, a quadratic Response Surface Method (RSM) is adopted as an approximation model for solving the system design problems. Particle Swarm Optimization (PSO) is introduced to search the appropriate design result in MDO in ship design. Finally, the validity of the proposed approach is proven by a case study of a bulk carrier.", "title": "" }, { "docid": "244dbf0d36d3d221e12b1844d440ecb2", "text": "A typical scene contains many different objects that compete for neural representation due to the limited processing capacity of the visual system. At the neural level, competition among multiple stimuli is evidenced by the mutual suppression of their visually evoked responses and occurs most strongly at the level of the receptive field. 
The competition among multiple objects can be biased by both bottom-up sensory-driven mechanisms and top-down influences, such as selective attention. Functional brain imaging studies reveal that biasing signals due to selective attention can modulate neural activity in visual cortex not only in the presence but also in the absence of visual stimulation. Although the competition among stimuli for representation is ultimately resolved within visual cortex, the source of top-down biasing signals likely derives from a distributed network of areas in frontal and parietal cortex. Competition suggests that once attentional resources are depleted, no further processing is possible. Yet, existing data suggest that emotional stimuli activate brain regions \"automatically,\" largely immune from attentional control. We tested the alternative possibility, namely, that the neural processing of stimuli with emotional content is not automatic and instead requires some degree of attention. Our results revealed that, contrary to the prevailing view, all brain regions responding differentially to emotional faces, including the amygdala, did so only when sufficient attentional resources were available to process the faces. Thus, similar to the processing of other stimulus categories, the processing of facial expression is under top-down control.", "title": "" }, { "docid": "921840f75f1270bcb148d9a74ff4db58", "text": "Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator’s internal representation, we can effectively modulate the discriminator’s accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods. (Video1)", "title": "" }, { "docid": "91bda7070a8f31deb1ae47cae019ea7d", "text": "The existence of an optimal feedback law is established for the risk sensitive optimal control problem with denumerable state space. The main assumptions imposed are irreducibility, and a near monotonicity condition on the one-step cost function. 
It is found that a solution can be found constructively using either value iteration or policy iteration under suitable conditions on initial feedback law.", "title": "" }, { "docid": "3abc2383539a2bedd478939ced030073", "text": "Evaluation of employee performance is an important element in enhancing the quality of the work and improves employees’ motivation to perform well. It also presents a basis for upgrading and enhancing of an organization. Periodical employees’ performance evaluation in an organization assists management to recognize its strengths and weaknesses. This paper presents a design and implementation of a performance appraisal system using the fuzzy logic. In addition to the normal process of performance evaluation modules, the system contains step by step inference engine processes. These processes demonstrate several calculation details in relations composition and aggregation methods such as min operator, algebraic product, sup-min and sup-product. The system has foundation to add-on analysis module to analyze and report the final result using various similarity measures. MS Access database was used to maintain the data, build the inference logic and develop all setting user interfaces.", "title": "" }, { "docid": "e32db5353519de574b70e33a3498b695", "text": "Reinforcement Learning (RL) provides a promising new approach to systems performance management that differs radically from standard queuing-theoretic approaches making use of explicit system performance models. In principle, RL can automatically learn high-quality management policies without an explicit performance model or traffic model and with little or no built-in system specific knowledge. In our original work [1], [2], [3] we showed the feasibility of using online RL to learn resource valuation estimates (in lookup table form) which can be used to make high-quality server allocation decisions in a multi-application prototype Data Center scenario. The present work shows how to combine the strengths of both RL and queuing models in a hybrid approach in which RL trains offline on data collected while a queuing model policy controls the system. By training offline we avoid suffering potentially poor performance in live online training. We also now use RL to train nonlinear function approximators (e.g. multi-layer perceptrons) instead of lookup tables; this enables scaling to substantially larger state spaces. Our results now show that in both open-loop and closed-loop traffic, hybrid RL training can achieve significant performance improvements over a variety of initial model-based policies. We also find that, as expected, RL can deal effectively with both transients and switching delays, which lie outside the scope of traditional steady-state queuing theory.", "title": "" }, { "docid": "86874d3f1740d709102c00063e53bfa5", "text": "The two dominant schemes for rule-learning, C4.5 and RIPPER, both operate in two stages. First they induce an initial rule set and then they refine it using a rather complex optimization stage that discards (C4.5) or adjusts (RIPPER) individual rules to make them work better together. In contrast, this paper shows how good rule sets can be learned one rule at a time, without any need for global optimization. We present an algorithm for inferring rules by repeatedly generating partial decision trees, thus combining the two major paradigms for rule generation—creating rules from decision trees and the separate-and-conquer rule-learning technique. 
The algorithm is straightforward and elegant: despite this, experiments on standard datasets show that it produces rule sets that are as accurate as and of similar size to those generated by C4.5, and more accurate than RIPPER’s. Moreover, it operates efficiently, and because it avoids postprocessing, does not suffer the extremely slow performance on pathological example sets for which the C4.5 method has been criticized.", "title": "" }, { "docid": "c5f749c36b3d8af93c96bee59f78efe5", "text": "INTRODUCTION\nMolecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.", "title": "" }, { "docid": "0fc051613dd8ac7b555a85f0ed2cccbc", "text": "BACKGROUND\nAtezolizumab is a humanised antiprogrammed death-ligand 1 (PD-L1) monoclonal antibody that inhibits PD-L1 and programmed death-1 (PD-1) and PD-L1 and B7-1 interactions, reinvigorating anticancer immunity. We assessed its efficacy and safety versus docetaxel in previously treated patients with non-small-cell lung cancer.\n\n\nMETHODS\nWe did a randomised, open-label, phase 3 trial (OAK) in 194 academic or community oncology centres in 31 countries. We enrolled patients who had squamous or non-squamous non-small-cell lung cancer, were 18 years or older, had measurable disease per Response Evaluation Criteria in Solid Tumors, and had an Eastern Cooperative Oncology Group performance status of 0 or 1. Patients had received one to two previous cytotoxic chemotherapy regimens (one or more platinum based combination therapies) for stage IIIB or IV non-small-cell lung cancer. Patients with a history of autoimmune disease and those who had received previous treatments with docetaxel, CD137 agonists, anti-CTLA4, or therapies targeting the PD-L1 and PD-1 pathway were excluded. Patients were randomly assigned (1:1) to intravenously receive either atezolizumab 1200 mg or docetaxel 75 mg/m2 every 3 weeks by permuted block randomisation (block size of eight) via an interactive voice or web response system. Coprimary endpoints were overall survival in the intention-to-treat (ITT) and PD-L1-expression population TC1/2/3 or IC1/2/3 (≥1% PD-L1 on tumour cells or tumour-infiltrating immune cells). The primary efficacy analysis was done in the first 850 of 1225 enrolled patients. This study is registered with ClinicalTrials.gov, number NCT02008227.\n\n\nFINDINGS\nBetween March 11, 2014, and April 29, 2015, 1225 patients were recruited. 
In the primary population, 425 patients were randomly assigned to receive atezolizumab and 425 patients were assigned to receive docetaxel. Overall survival was significantly longer with atezolizumab in the ITT and PD-L1-expression populations. In the ITT population, overall survival was improved with atezolizumab compared with docetaxel (median overall survival was 13·8 months [95% CI 11·8-15·7] vs 9·6 months [8·6-11·2]; hazard ratio [HR] 0·73 [95% CI 0·62-0·87], p=0·0003). Overall survival in the TC1/2/3 or IC1/2/3 population was improved with atezolizumab (n=241) compared with docetaxel (n=222; median overall survival was 15·7 months [95% CI 12·6-18·0] with atezolizumab vs 10·3 months [8·8-12·0] with docetaxel; HR 0·74 [95% CI 0·58-0·93]; p=0·0102). Patients in the PD-L1 low or undetectable subgroup (TC0 and IC0) also had improved survival with atezolizumab (median overall survival 12·6 months vs 8·9 months; HR 0·75 [95% CI 0·59-0·96]). Overall survival improvement was similar in patients with squamous (HR 0·73 [95% CI 0·54-0·98]; n=112 in the atezolizumab group and n=110 in the docetaxel group) or non-squamous (0·73 [0·60-0·89]; n=313 and n=315) histology. Fewer patients had treatment-related grade 3 or 4 adverse events with atezolizumab (90 [15%] of 609 patients) versus docetaxel (247 [43%] of 578 patients). One treatment-related death from a respiratory tract infection was reported in the docetaxel group.\n\n\nINTERPRETATION\nTo our knowledge, OAK is the first randomised phase 3 study to report results of a PD-L1-targeted therapy, with atezolizumab treatment resulting in a clinically relevant improvement of overall survival versus docetaxel in previously treated non-small-cell lung cancer, regardless of PD-L1 expression or histology, with a favourable safety profile.\n\n\nFUNDING\nF. Hoffmann-La Roche Ltd, Genentech, Inc.", "title": "" }, { "docid": "754163e498679e1d3c1449424c03a71f", "text": "J. K. Strosnider P. Nandi S. Kumaran S. Ghosh A. Arsanjani The current approach to the design, maintenance, and governance of service-oriented architecture (SOA) solutions has focused primarily on flow-driven assembly and orchestration of reusable service components. The practical application of this approach in creating industry solutions has been limited, because flow-driven assembly and orchestration models are too rigid and static to accommodate complex, real-world business processes. Furthermore, the approach assumes a rich, easily configured library of reusable service components when in fact the development, maintenance, and governance of these libraries is difficult. An alternative approach pioneered by the IBM Research Division, model-driven business transformation (MDBT), uses a model-driven software synthesis technology to automatically generate production-quality business service components from high-level business process models. In this paper, we present the business entity life cycle analysis (BELA) technique for MDBT-based SOA solution realization and its integration into serviceoriented modeling and architecture (SOMA), the end-to-end method from IBM for SOA application and solution development. BELA shifts the process-modeling paradigm from one that is centered on activities to one that is centered on entities. BELA teams process subject-matter experts with IT and data architects to identify and specify business entities and decompose business processes. 
Supporting synthesis tools then automatically generate the interacting business entity service components and their associated data stores and service interface definitions. We use a large-scale project as an example demonstrating the benefits of this innovation, which include an estimated 40 percent project cost reduction and an estimated 20 percent reduction in cycle time when compared with conventional SOA approaches.", "title": "" } ]
scidocsrr
145813504d1b101519db38107b78ed29
Clustervision: Visual Supervision of Unsupervised Clustering
[ { "docid": "83a2be29fde7b74609045472ad785a28", "text": "Clustering is a mostly unsupervised procedure and the majority of the clustering algorithms depend on certain assumptions in order to define the subgroups present in a data set. As a consequence, in most applications the resulting clustering scheme requires some sort of evaluation as regards its validity. In this paper we present a clustering validity procedure, which evaluates the results of clustering algorithms on data sets. We define a validity index, S_Dbw, based on well-defined clustering criteria enabling the selection of the optimal input parameters’ values for a clustering algorithm that result in the best partitioning of a data set. We evaluate the reliability of our index both theoretically and experimentally, considering three representative clustering algorithms ran on synthetic and real data sets. Also, we carried out an evaluation study to compare S_Dbw performance with other known validity indices. Our approach performed favorably in all cases, even in those that other indices failed to indicate the correct partitions in a data set.", "title": "" }, { "docid": "f6266e5c4adb4fa24cc353dccccaf6db", "text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widelyused topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways.", "title": "" } ]
[ { "docid": "76efa42a492d8eb36b82397e09159c30", "text": "attempt to foster AI and intelligent robotics research by providing a standard problem where a wide range of technologies can be integrated and examined. The first RoboCup competition will be held at the Fifteenth International Joint Conference on Artificial Intelligence in Nagoya, Japan. A robot team must actually perform a soccer game, incorporating various technologies, including design principles of autonomous agents, multiagent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor fusion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup’s final target is a world cup with real robots, RoboCup offers a software platform for research on the software aspects of RoboCup. This article describes technical challenges involved in RoboCup, rules, and the simulation environment.", "title": "" }, { "docid": "5dd65cd56cc9886cd96fca0eeb7cca5d", "text": "We formulate the problem of nonprojective dependency parsing as a polynomial-sized integer linear program. Our formulation is able to handle non-local output features in an efficient manner; not only is it compatible with prior knowledge encoded as hard constraints, it can also learn soft constraints from data. In particular, our model is able to learn correlations among neighboring arcs (siblings and grandparents), word valency, and tendencies toward nearlyprojective parses. The model parameters are learned in a max-margin framework by employing a linear programming relaxation. We evaluate the performance of our parser on data in several natural languages, achieving improvements over existing state-of-the-art methods.", "title": "" }, { "docid": "9754e309c6fb4805618d6ba4c18b5615", "text": "Deep neural networks (NN) are extensively used for machine learning tasks such as image classification, perception and control of autonomous systems. Increasingly, these deep NNs are also been deployed in high-assurance applications. Thus, there is a pressing need for developing techniques to verify neural networks to check whether certain user-expected properties are satisfied. In this paper, we study a specific verification problem of computing a guaranteed range for the output of a deep neural network given a set of inputs represented as a convex polyhedron. Range estimation is a key primitive for verifying deep NNs. We present an efficient range estimation algorithm that uses a combination of local search and linear programming problems to efficiently find the maximum and minimum values taken by the outputs of the NN over the given input set. In contrast to recently proposed “monolithic” optimization approaches, we use local gradient descent to repeatedly find and eliminate local minima of the function. The final global optimum is certified using a mixed integer programming instance. We implement our approach and compare it with Reluplex, a recently proposed solver for deep neural networks. We demonstrate the effectiveness of the proposed approach for verification of NNs used in automated control as well as those used in classification.", "title": "" }, { "docid": "054337a29922a1b56d46d1d3f10bc414", "text": "The ability to automatically learn task specific feature representations has led to a huge success of deep learning methods. When large training data is scarce, such as in medical imaging problems, transfer learning has been very effective. 
In this paper, we systematically investigate the process of transferring a Convolutional Neural Network, trained on ImageNet images to perform image classification, to kidney detection problem in ultrasound images. We study how the detection performance depends on the extent of transfer. We show that a transferred and tuned CNN can outperform a state-of-the-art feature engineered pipeline and a hybridization of these two techniques achieves 20% higher performance. We also investigate how the evolution of intermediate response images from our network. Finally, we compare these responses to state-of-the-art image processing filters in order to gain greater insight into how transfer learning is able to effectively manage widely varying imaging regimes.", "title": "" }, { "docid": "152fc0018ecb2d6d2b69e2a2e2eb6ef9", "text": "This paper examines the relationship between low interests maintained by advanced economy central banks and credit booms in emerging economies. In a model with crossborder banking, low funding rates increase credit supply, but the initial shock is amplified through the “risk-taking channel” of monetary policy where greater risk-taking interact with dampened measured risks that are driven by currency appreciation to create a feedback loop. In an empirical investigation using VAR analysis, we find that expectations of lower short-term rates dampens measured risks and stimulate cross-border banking sector capital flows. JEL Codes: F32, F33, F34", "title": "" }, { "docid": "1566ef8b6b9c21a22d9259e0ff21c71b", "text": "Reusable model design becomes desirable with the rapid expansion of machine learning applications. In this paper, we focus on the reusability of pre-trained deep convolutional models. Specifically, different from treating pre-trained models as feature extractors, we reveal more treasures beneath convolutional layers, i.e., the convolutional activations could act as a detector for the common object in the image colocalization problem. We propose a simple but effective method, named Deep Descriptor Transforming (DDT), for evaluating the correlations of descriptors and then obtaining the category-consistent regions, which can accurately locate the common object in a set of images. Empirical studies validate the effectiveness of the proposed DDT method. On benchmark image co-localization datasets, DDT consistently outperforms existing state-of-the-art methods by a large margin. Moreover, DDT also demonstrates good generalization ability for unseen categories and robustness for dealing with noisy data.", "title": "" }, { "docid": "3c4e1c7fd5dbdf5ea50eeed1afe23ff9", "text": "Power management is an important concern in sensor networks, because a tethered energy infrastructure is usually not available and an obvious concern is to use the available battery energy efficiently. However, in some of the sensor networking applications, an additional facility is available to ameliorate the energy problem: harvesting energy from the environment. Certain considerations in using an energy harvesting source are fundamentally different from that in using a battery, because, rather than a limit on the maximum energy, it has a limit on the maximum rate at which the energy can be used. Further, the harvested energy availability typically varies with time in a nondeterministic manner. 
While a deterministic metric, such as residual battery, suffices to characterize the energy availability in the case of batteries, a more sophisticated characterization may be required for a harvesting source. Another issue that becomes important in networked systems with multiple harvesting nodes is that different nodes may have different harvesting opportunity. In a distributed application, the same end-user performance may be achieved using different workload allocations, and resultant energy consumptions at multiple nodes. In this case, it is important to align the workload allocation with the energy availability at the harvesting nodes. We consider the above issues in power management for energy-harvesting sensor networks. We develop abstractions to characterize the complex time varying nature of such sources with analytically tractable models and use them to address key design issues. We also develop distributed methods to efficiently use harvested energy and test these both in simulation and experimentally on an energy-harvesting sensor network, prototyped for this work.", "title": "" }, { "docid": "bec66d4d576f2c5c5643ffe4b72ab353", "text": "Many cities suffer from noise pollution, which compromises people's working efficiency and even mental health. New York City (NYC) has opened a platform, entitled 311, to allow people to complain about the city's issues by using a mobile app or making a phone call; noise is the third largest category of complaints in the 311 data. As each complaint about noises is associated with a location, a time stamp, and a fine-grained noise category, such as \"Loud Music\" or \"Construction\", the data is actually a result of \"human as a sensor\" and \"crowd sensing\", containing rich human intelligence that can help diagnose urban noises. In this paper we infer the fine-grained noise situation (consisting of a noise pollution indicator and the composition of noises) of different times of day for each region of NYC, by using the 311 complaint data together with social media, road network data, and Points of Interests (POIs). We model the noise situation of NYC with a three dimension tensor, where the three dimensions stand for regions, noise categories, and time slots, respectively. Supplementing the missing entries of the tensor through a context-aware tensor decomposition approach, we recover the noise situation throughout NYC. The information can inform people and officials' decision making. We evaluate our method with four real datasets, verifying the advantages of our method beyond four baselines, such as the interpolation-based approach.", "title": "" }, { "docid": "b020317d54fd5b005f1b1e3b963eab36", "text": "Augmented reality (AR) applications have recently become popular on modern smartphones. We explore the effectiveness of this mobile AR technology in the context of grocery shopping, in particular as a means to assist shoppers in making healthier decisions as they decide which grocery products to buy. We construct an AR-assisted mobile grocery-shopping application that makes real-time, customized recommendations of healthy products to users and also highlights products to avoid for various types of health concerns, such as allergies to milk or nut products, low-sodium or low-fat diets, and general caloric intake. We have implemented a prototype of this AR-assisted mobile grocery shopping application and evaluated its effectiveness in grocery store aisles. 
Our application's evaluation with typical grocery shoppers demonstrates that AR overlay tagging of products reduces the search time to find healthy food items, and that coloring the tags helps to improve the user's ability to quickly and easily identify recommended products, as well as products to avoid. We have evaluated our application's functionality by analyzing the data we collected from 15 in-person actual grocery-shopping subjects and 104 online application survey participants.", "title": "" }, { "docid": "8008ced1bfb2e7417d9f7bbb1d382fe0", "text": "We describe an approach for analysing and attacking the physical part (a process) of a cyber-physical system. The stages of this approach are demonstrated in a case study, a simulation of a vinyl acetate monomer plant. We want to demonstrate in particular where security has to rely on expert knowledge in the domain of the physical components and processes of a system and that there are major challenges for converting cyber attacks into successful cyber-physical attacks.", "title": "" }, { "docid": "6f94fd155f3689ab1a6b242243b13e09", "text": "Personalized medicine performs diagnoses and treatments according to the DNA information of the patients. The new paradigm will change the health care model in the future. A doctor will perform the DNA sequence matching instead of the regular clinical laboratory tests to diagnose and medicate the diseases. Additionally, with the help of the affordable personal genomics services such as 23andMe, personalized medicine will be applied to a great population. Cloud computing will be the perfect computing model as the volume of the DNA data and the computation over it are often immense. However, due to the sensitivity, the DNA data should be encrypted before being outsourced into the cloud. In this paper, we start from a practical system model of the personalize medicine and present a solution for the secure DNA sequence matching problem in cloud computing. Comparing with the existing solutions, our scheme protects the DNA data privacy as well as the search pattern to provide a better privacy guarantee. We have proved that our scheme is secure under the well-defined cryptographic assumption, i.e., the sub-group decision assumption over a bilinear group. Unlike the existing interactive schemes, our scheme requires only one round of communication, which is critical in practical application scenarios. We also carry out a simulation study using the real-world DNA data to evaluate the performance of our scheme. The simulation results show that the computation overhead for real world problems is practical, and the communication cost is small. Furthermore, our scheme is not limited to the genome matching problem but it applies to general privacy preserving pattern matching problems which is widely used in real world.", "title": "" }, { "docid": "567a329d4bdae315dddaabc165cf38eb", "text": "We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence ŷ = {y0 . . . yT }, by maximizing p(y|x) = ∏ t p(yt|x; {y0 . . . yt−1}). Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model’s output without requiring any modification of the model parameters or training data. 
We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.", "title": "" }, { "docid": "fd111c4f99c0fe9d8731385f6c7eb04f", "text": "We introduce a greedy transition-based parser that learns to represent parser states using recurrent neural networks. Our primary innovation that enables us to do this efficiently is a new control structure for sequential neural networks—the stack long short-term memory unit (LSTM). Like the conventional stack data structures used in transition-based parsers, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. Our model captures three facets of the parser's state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of transition actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. In addition, we compare two different word representations: (i) standard word vectors based on look-up tables and (ii) character-based models of words. Although standard word embedding models work well in all languages, the character-based models improve the handling of out-of-vocabulary words, particularly in morphologically rich languages. Finally, we discuss the use of dynamic oracles in training the parser. During training, dynamic oracles alternate between sampling parser states from the training data and from the model as it is being learned, making the model more robust to the kinds of errors that will be made at test time. Training our model with dynamic oracles yields a linear-time greedy parser with very competitive performance.", "title": "" }, { "docid": "0e0a845f1342e1491cd76717fa9eaa70", "text": "The relationships between the work products of a security engineering process can be hard to understand, even for persons with a strong technical background but little knowledge of security engineering. Market forces are driving software practitioners who are not security specialists to develop software that requires security features. When these practitioners develop software solutions without appropriate security-specific processes and models, they sometimes fail to produce effective solutions. We have adapted a proven object-oriented modeling technique, use cases, to capture and analyze security requirements in a simple way. We call the adaptation an abuse case model. Its relationship to other security engineering work products is relatively simple, from a", "title": "" }, { "docid": "39d073029716355745710b57cb6d692d", "text": "Graphics standards are receiving increased attention in the computer graphics community as more people write programs that use 3D graphics and as those already possessing 3D graphical programs want those programs to run on a variety of computers. OpenGL is an emerging graphics standard that provides advanced rendering features while maintaining a simple programming model. Its procedural interface allows a graphics programmer to describe rendering tasks, whether simple or complex, easily and eeciently. 
Because OpenGL is rendering-only, it can be incorporated into any window system (and has been, into the X Window System and the soon-to-be-released Windows NT) or can be used without a window system. Finally, OpenGL is designed so that it can be implemented to take advantage of a wide range of graphics hardware capabilities, from a basic framebuuer to the most sophisticated graphics subsystems.", "title": "" }, { "docid": "aaec79a58537f180aba451ea825ed013", "text": "In my March 2006 CACM article I used the term \" computational thinking \" to articulate a vision that everyone, not just those who major in computer science, can benefit from thinking like a computer scientist [Wing06]. So, what is computational thinking? Here is a definition that Jan use; it is inspired by an email exchange I had with Al Aho of Columbia University: Computational Thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent [CunySnyderWing10] Informally, computational thinking describes the mental activity in formulating a problem to admit a computational solution. The solution can be carried out by a human or machine, or more generally, by combinations of humans and machines. When I use the term computational thinking, my interpretation of the words \" problem \" and \" solution \" is broad; in particular, I mean not just mathematically well-defined problems whose solutions are completely analyzable, e.g., a proof, an algorithm, or a program, but also real-world problems whose solutions might be in the form of large, complex software systems. Thus, computational thinking overlaps with logical thinking and systems thinking. It includes algorithmic thinking and parallel thinking, which in turn engage other kinds of thought processes, e.g., compositional reasoning, pattern matching, procedural thinking, and recursive thinking. Computational thinking is used in the design and analysis of problems and their solutions, broadly interpreted. The most important and high-level thought process in computational thinking is the abstraction process. Abstraction is used in defining patterns, generalizing from instances, and parameterization. It is used to let one object stand for many. It is used to capture essential properties common to a set of objects while hiding irrelevant distinctions among them. For example, an algorithm is an abstraction of a process that takes inputs, executes a sequence of steps, and produces outputs to satisfy a desired goal. An abstract data type defines an abstract set of values and operations for manipulating those values, hiding the actual representation of the values from the user of the abstract data type. Designing efficient algorithms inherently involves designing abstract data types. Abstraction gives us the power to scale and deal with complexity. Recursively applying abstraction gives us the ability to build larger and larger systems, with the base case (at least for computer science) being bits (0's …", "title": "" }, { "docid": "9db388f2564a24f58d8ea185e5b514be", "text": "Analyzing large volumes of log events without some kind of classification is undoable nowadays due to the large amount of events. Using AI to classify events make these log events usable again. 
With the use of the Keras Deep Learning API, which supports many Optimizing Stochastic Gradient Decent algorithms, better known as optimizers, this research project tried these algorithms in a Long Short-Term Memory (LSTM) network, which is a variant of the Recurrent Neural Networks. These algorithms have been applied to classify and update event data stored in Elastic-Search. The LSTM network consists of five layers where the output layer is a Dense layer using the Softmax function for evaluating the AI model and making the predictions. The Categorical Cross-Entropy is the algorithm used to calculate the loss. For the same AI model, different optimizers have been used to measure the accuracy and the loss. Adam was found as the best choice with an accuracy of 29,8%.", "title": "" }, { "docid": "335a551d08afd6af7d90b35b2df2ecc4", "text": "The interpretation of colonic biopsies related to inflammatory conditions can be challenging because the colorectal mucosa has a limited repertoire of morphologic responses to various injurious agents. Only few processes have specific diagnostic features, and many of the various histological patterns reflect severity and duration of the disease. Importantly the correlation with endoscopic and clinical information is often cardinal to arrive at a specific diagnosis in many cases.", "title": "" }, { "docid": "5054ad32c33dc2650c1dcee640961cd5", "text": "Benchmarks have played a vital role in the advancement of visual object recognition and other fields of computer vision (LeCun et al., 1998; Deng et al., 2009; ). The challenges posed by these standard datasets have helped identify and overcome the shortcomings of existing approaches, and have led to great advances of the state of the art. Even the recent massive increase of interest in deep learning methods can be attributed to their success in difficult benchmarks such as ImageNet (Krizhevsky et al., 2012; LeCun et al., 2015). Neuromorphic vision uses silicon retina sensors such as the dynamic vision sensor (DVS; Lichtsteiner et al., 2008). These sensors and their DAVIS (Dynamic and Activepixel Vision Sensor) and ATIS (Asynchronous Time-based Image Sensor) derivatives (Brandli et al., 2014; Posch et al., 2014) are inspired by biological vision by generating streams of asynchronous events indicating local log-intensity brightness changes. They thereby greatly reduce the amount of data to be processed, and their dynamic nature makes them a good fit for domains such as optical flow, object tracking, action recognition, or dynamic scene understanding. Compared to classical computer vision, neuromorphic vision is a younger and much smaller field of research, and lacks benchmarks, which impedes the progress of the field. To address this we introduce the largest event-based vision benchmark dataset published to date, hoping to satisfy a growing demand and stimulate challenges for the community. In particular, the availability of such benchmarks should help the development of algorithms processing event-based vision input, allowing a direct fair comparison of different approaches. We have explicitly chosen mostly dynamic vision tasks such as action recognition or tracking, which could benefit from the strengths of neuromorphic vision sensors, although algorithms that exploit these features are largely missing. A major reason for the lack of benchmarks is that currently neuromorphic vision sensors are only available as R&D prototypes. Nonetheless, there are several datasets already available; see Tan et al. 
(2015) for an informative review. Unlabeled DVS data was made available around 2007 in the jAER project1 and was used for development of spike timing-based unsupervised feature learning e.g., in Bichler et al. (2012). The first labeled and published event-based neuromorphic vision sensor benchmarks were created from the MNIST digit recognition dataset by jiggling the image on the screen (see Serrano-Gotarredona and Linares-Barranco, 2015 for an informative history) and later to reduce frame artifacts by jiggling the camera view with a pan-tilt unit (Orchard et al., 2015). These datasets automated the scene movement necessary to generate DVS output from the static images, and will be an important step forward for evaluating neuromorphic object recognition systems such as spiking deep networks (Pérez-Carrasco et al., 2013; O’Connor et al., 2013; Cao et al., 2014; Diehl et al., 2015), which so far have been tested mostly on static image datasets converted", "title": "" }, { "docid": "55a303fce18db3d87747b8d3eaccc544", "text": "Although organizations are using more virtual teams to accomplish work, they are finding it difficult to use traditional forms of leadership to manage these teams. Many organizations are encouraging a shared leadership approach over the traditional individual leader. Yet, there have been only a few empirical studies directly examining the effectiveness of such an approach and none have taken into account the team diversity. To address this gap, this paper reports the results of an empirical examination of the impacts of shared leadership in virtual teams. Results confirm the proposed research model. The impacts of shared leadership are multilevel and vary by race and gender. In addition, while shared leadership promotes team satisfaction despite prior assumptions, it actually reduces rather than increases team performance.", "title": "" } ]
scidocsrr
f3cbce002cd7f327c1e212e2a6c52d37
MTNet: A Neural Approach for Cross-Domain Recommendation with Unstructured Text
[ { "docid": "40fe24e70fd1be847e9f89b82ff75b28", "text": "Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multitask learning. Specifically, we propose a new sharing unit: \"cross-stitch\" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.", "title": "" }, { "docid": "02e3ce674a40204d830f12164215cfbd", "text": "Multi-Task Learning (MTL) is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey for MTL. First, we classify different MTL algorithms into several categories: feature learning approach, low-rank approach, task clustering approach, task relation learning approach, dirty approach, multi-level approach and deep learning approach. In order to compare different approaches, we discuss the characteristics of each approach. In order to improve the performance of learning tasks further, MTL can be combined with other learning paradigms including semi-supervised learning, active learning, reinforcement learning, multi-view learning and graphical models. When the number of tasks is large or the data dimensionality is high, batch MTL models are difficult to handle this situation and online, parallel and distributed MTL models as well as feature hashing are reviewed to reveal the computational and storage advantages. Many real-world applications use MTL to boost their performance and we introduce some representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.", "title": "" } ]
[ { "docid": "f95ac9c90ad4f5a3c08924f9aa24ca20", "text": "The Semantic Web is an extension of the current web in which information is given well-defined meaning. The perspective of Semantic Web is to promote the quality and intelligence of the current web by changing its contents into machine understandable form. Therefore, semantic level information is one of the cornerstones of the Semantic Web. The process of adding semantic metadata to web resources is called Semantic Annotation. There are many obstacles against the Semantic Annotation, such as multilinguality, scalability, and issues which are related to diversity and inconsistency in content of different web pages. Due to the wide range of domains and the dynamic environments that the Semantic Annotation systems must be performed on, the problem of automating annotation process is one of the significant challenges in this domain. To overcome this problem, different machine learning approaches such as supervised learning, unsupervised learning and more recent ones like, semi-supervised learning and active learning have been utilized. In this paper we present an inclusive layered classification of Semantic Annotation challenges and discuss the most important issues in this field. Also, we review and analyze machine learning applications for solving semantic annotation problems. For this goal, the article tries to closely study and categorize related researches for better understanding and to reach a framework that can map machine learning techniques into the Semantic Annotation challenges and requirements.", "title": "" }, { "docid": "b03e24b5c9491ff650ee1ffa587731b5", "text": "Malicious OS kernel can easily access user's private data in main memory and pries human-machine interaction data, even one that employs privacy enforcement based on application level or OS level. This paper introduces AppSec, a hypervisor-based safe execution environment, to protect both the memory data and human-machine interaction data of security sensitive applications from the untrusted OS transparently.\n AppSec provides several security mechanisms on an untrusted OS. AppSec introduces a safe loader to check the code integrity of application and dynamic shared objects. During runtime, AppSec protects application and dynamic shared objects from being modified and verifies kernel memory accesses according to application's intention. AppSec provides a devices isolation mechanism to prevent the human-machine interaction devices being accessed by compromised kernel. On top of that, AppSec further provides a privileged-based window system to protect application's X resources. The major advantages of AppSec are threefold. First, AppSec verifies and protects all dynamic shared objects during runtime. Second, AppSec mediates kernel memory access according to application's intention but not encrypts all application's data roughly. Third, AppSec provides a trusted I/O path from end-user to application. A prototype of AppSec is implemented and shows that AppSec is efficient and practical.", "title": "" }, { "docid": "7dde8fad4448a27a38b6dd5f6d41617f", "text": "We address the problem of making general video game playing agents play in a human-like manner. To this end, we introduce several modifications of the UCT formula used in Monte Carlo Tree Search that biases action selection towards repeating the current action, making pauses, and limiting rapid switching between actions. 
Playtraces of human players are used to model their propensity for repeated actions; this model is used for biasing the UCT formula. Experiments show that our modified MCTS agent, called BoT, plays quantitatively similar to human players as measured by the distribution of repeated actions. A survey of human observers reveals that the agent exhibits human-like playing style in some games but not others.", "title": "" }, { "docid": "596bb1265a375c68f0498df90f57338e", "text": "The concept of unintended pregnancy has been essential to demographers in seeking to understand fertility, to public health practitioners in preventing unwanted childbear-ing and to both groups in promoting a woman's ability to determine whether and when to have children. Accurate measurement of pregnancy intentions is important in understanding fertility-related behaviors, forecasting fertility, estimating unmet need for contraception, understanding the impact of pregnancy intentions on maternal and child health, designing family planning programs and evaluating their effectiveness, and creating and evaluating community-based programs that prevent unintended pregnancy. 1 Pregnancy unintendedness is a complex concept, and has been the subject of recent conceptual and method-ological critiques. 2 Pregnancy intentions are increasingly viewed as encompassing affective, cognitive, cultural and contextual dimensions. Developing a more complete understanding of pregnancy intentions should advance efforts to increase contraceptive use, to prevent unintended pregnancies and to improve the health of women and their children. To provide a scientific foundation for public health efforts to prevent unintended pregnancy, we conducted a review of unintended pregnancy between the fall of 1999 and the spring of 2001 as part of strategic planning activities within the Division of Reproductive Health at the Centers for Disease Control and Prevention (CDC). We reviewed the published and unpublished literature, consulted with experts in reproductive health and held several joint meetings with the Demographic and Behavioral Research Branch of the National Institute of Child Health and Human Development , and the Office of Population Affairs of the Department of Health and Human Services. We used standard scientific search engines, such as Medline, to find relevant articles published since 1975, and identified older references from bibliographies contained in recent articles; academic experts and federal officials helped to identify unpublished reports. This comment summarizes our findings and incorporates insights gained from the joint meetings and the strategic planning process. CURRENT DEFINITIONS AND MEASURES Conventional measures of unintended pregnancy are designed to reflect a woman's intentions before she became pregnant. 3 Unintended pregnancies are pregnancies that are reported to have been either unwanted (i.e., they occurred when no children, or no more children, were desired) or mistimed (i.e., they occurred earlier than desired). In contrast, pregnancies are described as intended if they are reported to have happened at the \" right time \" 4 or later than desired (because of infertility or difficulties in conceiving). A concept related to unintended pregnancy is unplanned pregnancy—one that occurred when …", "title": "" }, { "docid": "c1a371bbab3a931b5686d50cf2fc3980", "text": "Children with developmental language or speech disorders frequently benefit from augmentative and alternative communication (AAC) strategies. 
These children have severe expressive or receptive communication disorders or both which sometimes occur in isolation, or as part of a global developmental disability. Children with specific language impairment, pervasive developmental disorder, developmental apraxia of speech, autism, Down syndrome, or other types of developmental disabilities may need to use AAC strategies to supplement or enhance their language development. These children offer challenges to professionals, especially during the early years of language development. In the very young child, it is often difficult to determine the nature and degree of language impairment, to accurately diagnose the presence of other factors such as cognitive disabilities, and to predict the child's future prognosis for language or speech development. In the past, young children diagnosed with severe language and speech disorders would have eceived years of traditional speech therapy focused on developing spoken communication skills (Silverman, 1995). AAC would have been recommended only after traditional therapy techniques had failed. Today, professionals realize that AAC strategies can provide children who have developmental delays with an immediate means of communication; can facilitate expressive and receptive language development until other communication modalities improve (i.e., speech); and can serve as a bridge to future spoken language development (Kangas & Lloyd, 1988; Silverman 1995). AAC provides an expressive method of communication to facilitate language development in children who, in all likelihood, will eventually use speech to communicate. The focus of this chapter is on children with developmental disabilities who have the potential to use AAC strategies to develop receptive and expressive language skills, including speech. The chapter begins by presenting an overview of using AAC as a tool to enhance language and speech development. Specific AAC assessment and intervention issues are then presented in two separate sections for children with developmental apraxia of speech and for children with pervasive developmental disorders, including autism.", "title": "" }, { "docid": "0c34e8355f1635b3679159abd0a82806", "text": "Bar charts are an effective way to convey numeric information, but today's algorithms cannot parse them. Existing methods fail when faced with even minor variations in appearance. Here, we present DVQA, a dataset that tests many aspects of bar chart understanding in a question answering framework. Unlike visual question answering (VQA), DVQA requires processing words and answers that are unique to a particular bar chart. State-of-the-art VQA algorithms perform poorly on DVQA, and we propose two strong baselines that perform considerably better. Our work will enable algorithms to automatically extract numeric and semantic information from vast quantities of bar charts found in scientific publications, Internet articles, business reports, and many other areas.", "title": "" }, { "docid": "f410b56e4e0e16fe375538eed94d72e0", "text": "This paper presents a proposed Flipped Voltage Follower (FVF) based output capacitorless low-dropout (OCL-LDO) regulator using Dual-Summed Miller Frequency Compensation (DSMFC) technique. Validated by UMC 65-nm CMOS process, the simulation results have shown that the proposed LDO regulator can be stabilized by a total compensation capacitance (CC) of 8 pF for a load capacitance (CL) ranging from 10 pF to 10 nF. It consumes 23.7 μA quiescent current with a 1.2 V supply voltage. 
With a dropout voltage of 200 mV, the LDO regulator can support a maximum 50 mA load current. It can settle in less than 1.7 μs with a 1% accuracy for the whole CL range. The proposed LDO regulator is comparable to other reported works in terms of figure-of-merit (FOM). Most significantly, it can drive the widest range of CL and achieve the highest CL(max)/CC ratio with respect to the counterparts.", "title": "" }, { "docid": "eaf618e514aa4714519eb88a44f27937", "text": "Many cellular stresses activate senescence, a persistent hyporeplicative state characterized in part by expression of the p16INK4a cell-cycle inhibitor. Senescent cell production occurs throughout life and plays beneficial roles in a variety of physiological and pathological processes including embryogenesis, wound healing, host immunity, and tumor suppression. Meanwhile, the steady accumulation of senescent cells with age also has adverse consequences. These non-proliferating cells occupy key cellular niches and elaborate pro-inflammatory cytokines, contributing to aging-related diseases and morbidity. This model suggests that the abundance of senescent cells in vivo predicts \"molecular,\" as opposed to chronologic, age and that senescent cell clearance may mitigate aging-associated pathology.", "title": "" }, { "docid": "393513f676132d333bb1ebff884da7b7", "text": "This paper reports an investigation of some methods for isolating, or segmenting, characters during the reading of machineprinted text by optical character recognition systems. Two new segmentation algorithms using feature extraction techniques are presented; both are intended for use in the recognition of machine-printed lines of lo-, 11and 12-pitch serif-type multifont characters. One of the methods, called quasi-topological segmentation, bases the decision to “section” a character on a combination of featureextraction and character-width measurements. The other method, topological segmentation, involves feature extraction alone. The algorithms have been tested with an evaluation method that is independent of any particular recognition system. Test results are based on application of the algorithm to upper-case alphanumeric characters gathered from print sources that represent the existing world of machine printing. The topological approach demonstrated better performance on the test data than did the quasitopological approach. Introduction When character recognition systems are structured to recognize one character at a time, some means must be provided to divide the incoming data stream into segments that define the beginning and end of each character. Writing about this aspect of pattern recognition in his review article, G. Nagy [l] stated that “object isolation is all too often ignored in laboratory studies. Yet touching characters are responsible for the majority of errors in the automatic reading of both machine-printed and hand-printed text. . . . ” The importance of the touching-character problem in the design of practical character recognition machines motivated the laboratory study reported in this paper. We present two new algorithms for separating upper-case serif characters, develop a general philosophy for evaluating the effectiveness of segmentation algorithms, and evaluate the performance of our algorithms when they are applied to lo-, 11and 12-pitch alphanumeric characters. 
The segmentation algorithms were developed specifically for potential use with recognition systems that use a raster-type scanner to produce an analog video signal that is digitized before presentation of the data to the recognition logic. The raster is assumed to move from right to left across a line of printed characters and to make approximately 20 vertical scans per character. This approach to recognition technology is the one most commonly used in IBM’s current optical character recognition machines. A paper on the IBM 1975 Optical Page Reader [2] gives one example of how the approach has been implemented. Other approaches to recognition technology may not require that decisions be made to identify the beginning and end of characters. Nevertheless, the performance of any recognition system is affected by the presence of touching characters and the design of recognition algorithms must take the problem into account (see Clayden, Clowes and Parks [3]). Simple character recognition systems of the type we are concerned with perform segmentation by requiring that bit patterns of characters be separated by scans containing no “black” bits. However, this method is rarely adequate to separate characters printed in the common business-machine and typewriter fonts. These fonts, after all, were not designed with machine recognition in mind; but they are nevertheless the fonts it is most desirable for a machine to be able to recognize. In the 12-pitch, serif-type fonts examined for the present study, up to 35 percent of the segments occurred not at blank scans, but within touching character pairs.", "title": "" }, { "docid": "184596076bf83518c3cf3f693e62cad7", "text": "High-K (HK) and Metal-Gate (MG) transistor reliability is very challenging both from the standpoint of introduction of new materials and requirement of higher field of operation for higher performance. In this paper, key and unique HK+MG intrinsic transistor reliability mechanisms observed on 32nm logic technology generation is presented. We'll present intrinsic reliability similar to or better than 45nm generation.", "title": "" }, { "docid": "5c7d3aa4a0ffb2d67d6454e63b8d94a8", "text": "The micro-architecture of the substantia nigra was studied in control cases of varying age and patients with parkinsonism. A single 7 mu section stained with haematoxylin and eosin was examined at a specific level within the caudal nigra using strict criteria. The pars compacta was divided into a ventral and a dorsal tier, and each tier was further subdivided into 3 regions. In 36 control cases there was a linear fallout of pigmented neurons with advancing age in the pars compacta of the caudal substantia nigra at a rate of 4.7% per decade. Regionally, the lateral ventral tier was relatively spared (2.1% loss per decade) compared with the medial ventral tier (5.4%) and the dorsal tier (6.9%). In 20 Parkinson's disease (PD) cases of varying disease duration there was an exponential loss of pigmented neurons with a 45% loss in the first decade. Regionally, the pattern was opposite to ageing. Loss was greatest in the lateral ventral tier (average loss 91%) followed by the medial ventral tier (71%) and the dorsal tier (56%). The presymptomatic phase of PD from the onset of neuronal loss was estimated to be about 5 yrs. This phase is represented by incidental Lewy body cases: individuals who die without clinical signs of PD or dementia, but who are found to have Lewy bodies at post-mortem.
In 7 cases cell loss was confined to the lateral ventral tier (average loss 52%) congruent with the lateral ventral selectivity of symptomatic PD. It was calculated that at the onset of symptoms there was a 68% cell loss in the lateral ventral tier and a 48% loss in the caudal nigra as a whole. The regional selectivity of PD is relatively specific. In 15 cases of striatonigral degeneration the distribution of cell loss was similar, but the loss in the dorsal tier was greater than PD by 21%. In 14 cases of Steele-Richardson-Olszewski syndrome (SRO) there was no predilection for the lateral ventral tier, but a tendency to involve the medial nigra and spare the lateral. These findings suggest that age-related attrition of pigmented nigral cells is not an important factor in the pathogenesis of PD.", "title": "" }, { "docid": "63d9f909fe0d5d614fd13b8a6676fab3", "text": "Awareness of other vehicle's intention may help human drivers or autonomous vehicles judge the risk and avoid traffic accidents. This paper proposed an approach to predicting driver's intentions using Hidden Markov Model (HMM) which is able to access the control and the state of the vehicle. The driver performs maneuvers including stop/non-stop, change lane left/right and turn left/right in a simulator in both highway and urban environments. Moreover, the structure of the road (curved road) is also taken into account for classification. Experiments were conducted with different input sets (steering wheel data with and without vehicle state data) to compare the system performance.", "title": "" }, { "docid": "61801d62bfb0afe664a1cb374461f8ec", "text": "Methodical studies on Smart-shoe-based gait detection systems have become an influential constituent in decreasing elderly injuries due to fall. This paper proposes smartphone-based system for analyzing characteristics of gait by using a wireless Smart-shoe. The system employs four force sensitive resistors (FSR) to measure the pressure distribution underneath a foot. Data is collected via a Wi-Fi communication network between the Smart-shoe and smartphone for further processing in the phone. Experimentation and verification is conducted on 10 subjects with different gait including free gait. The sensor outputs, with gait analysis acquired from the experiment, are presented in this paper.", "title": "" }, { "docid": "aae5e70b47a7ec720333984e80725034", "text": "The authors realize a 50% length reduction of short-slot couplers in a post-wall dielectric substrate by two techniques. One is to introduce hollow rectangular holes near the side walls of the coupled region. The difference of phase constant between the TE10 and TE20 propagating modes increases and the required length to realize a desired dividing ratio is reduced. Another is to remove two reflection-suppressing posts in the coupled region. The length of the coupled region is determined to cancel the reflections at both ends of the coupled region. The total length of a 4-way Butler matrix can be reduced to 48% in comparison with the conventional one and the couplers still maintain good dividing characteristics; the dividing ratio of the hybrid is less than 0.1 dB and the isolations of the couplers are more than 20 dB. 
key words: short-slot coupler, length reduction, Butler matrix, post-wall waveguide, dielectric substrate, rectangular hole", "title": "" }, { "docid": "9cc8d5f395a11ceaabdf9b2e57aa2bc9", "text": "This paper proposes a Model Predictive Control methodology for a non-inverting Buck-Boost DC-DC converter for its efficient control. PID and MPC control strategies are simulated for the control of Buck-Boost converter and its performance is compared using MATLAB Simulink model. MPC shows better performance compared to PID controller. Output follows reference voltage more accurately showing that MPC can handle the dynamics of the system efficiently. The proposed methodology can be used for constant voltage applications. The control strategy can be implemented using a Field Programmable Gate Array (FPGA).", "title": "" }, { "docid": "fd81a8da4e684db7cf4a2d4f8b52e87e", "text": "Controlling fluids is still an open and challenging problem in fluid animation. In this paper we develop a novel fluid animation control approach and we present its application to controlling breaking waves. In our <i>Slice Method</i> framework an animator defines the shape of a breaking wave at a desired moment in its evolution based on a library of breaking waves. Our system computes then the subsequent dynamics with the aid of a 3D Navier-Stokes solver. The wave dynamics previous to the moment the animator exerts control can also be generated based on the wave library. The animator is thus enabled to obtain a full animation of a breaking wave while controlling the shape and the timing of the breaking. An additional advantage of the method is that it provides a significantly faster method for obtaining the full 3D breaking wave evolution compared to starting the simulation at an early stage and using solely the 3D Navier-Stokes equations. We present a series of 2D and 3D breaking wave animations to demonstrate the power of the method.", "title": "" }, { "docid": "055cb9aca6b16308793944154dc7866a", "text": "Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion and in neural network literature, this is the least addressed component, yet it has a decisive influence in generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural network and other classifier topologies to approximate correct labels. The main motivation to utilize least squares in regression simply comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications – which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error combined with the maximum likelihood principle could be emphasized to promote this criterion. In nonparametric regression, least squares principle leads to the conditional expectation solution, which is intuitively appealing. 
Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear-second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This begs the question: what other alternative cost functions could be used to train adaptive systems and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?", "title": "" }, { "docid": "223c9e9bd6ad868eea2c936437abe2a7", "text": "Determining the rigid transformation relating 2D images to known 3D geometry is a classical problem in photogrammetry and computer vision. Heretofore, the best methods for solving the problem have relied on iterative optimization methods which cannot be proven to converge and/or which do not effectively account for the orthonormal structure of rotation matrices. We show that the pose estimation problem can be formulated as that of minimizing an error metric based on collinearity in object (as opposed to image) space. Using object space collinearity error, we derive an iterative algorithm which directly computes orthogonal rotation matrices and which is globally convergent. Experimentally, we show that the method is computationally efficient, that it is no less accurate than the best currently employed optimization methods, and that it outperforms all tested methods in robustness to outliers. Index Terms: Pose estimation, absolute orientation, optimization, weak-perspective camera models, numerical optimization.", "title": "" } ]
scidocsrr
84c4cda1303580dec64971a99ffcdd00
The Research Audit Trial – Enhancing Trustworthiness in Qualitative Inquiry
[ { "docid": "aeb19f8f9c6e5068fc602682e4ae04d3", "text": "Received: 29 November 2004 Revised: 26 July 2005 Accepted: 4 November 2005 Abstract Interpretive research in information systems (IS) is now a well-established part of the field. However, there is a need for more material on how to carry out such work from inception to publication. I published a paper a decade ago (Walsham, 1995) which addressed the nature of interpretive IS case studies and methods for doing such research. The current paper extends this earlier contribution, with a widened scope of all interpretive research in IS, and through further material on carrying out fieldwork, using theory and analysing data. In addition, new topics are discussed on constructing and justifying a research contribution, and on ethical issues and tensions in the conduct of interpretive work. The primary target audience for the paper is lessexperienced IS researchers, but I hope that the paper will also stimulate reflection for the more-experienced IS researcher and be of relevance to interpretive researchers in other social science fields. European Journal of Information Systems (2006) 15, 320–330. doi:10.1057/palgrave.ejis.3000589", "title": "" } ]
[ { "docid": "51a9180623be4ddaf514377074edc379", "text": "Breast region measurements are important for research, but they may also become significant in the legal field as a quantitative tool for preoperative and postoperative evaluation. Direct anthropometric measurements can be taken in clinical practice. The aim of this study was to compare direct breast anthropometric measurements taken with a tape measure and a compass. Forty women, aged 18–60 years, were evaluated. They had 14 anatomical landmarks marked on the breast region and arms. The union of these points formed eight linear segments and one angle for each side of the body. The volunteers were evaluated by direct anthropometry in a standardized way, using a tape measure and a compass. Differences were found between the tape measure and the compass measurements for all segments analyzed (p > 0.05). Measurements obtained by tape measure and compass are not identical. Therefore, once the measurement tool is chosen, it should be used for the pre- and postoperative measurements in a standardized way. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" }, { "docid": "e820d9e767766d460463805edf86c684", "text": "Software systems are often designed without considering their social intentionality and the software process changes required to accommodate them. With the rise of artificial intelligence and cognitive services-based systems, software can no longer be considered a passive participant in a domain. Structured and methodological approaches are required to study the intentions and motives of such software systems and their corresponding effect on the design of business and software processes that interact with these software systems. This paper considers chatbots as domain example for illustrating the complexities of designing such intentional and intelligent systems, and the resultant changes and reconfigurations in processes. A mechanism of associating process architecture models and actor models is presented. The modeling and analysis of two types of chatbots retrieval-based and generative are shown using both process architecture and actor models.", "title": "" }, { "docid": "e966d95bfbe0b44154a38e751adca533", "text": "In-memory data analytic systems that use vertical bit-parallel scan methods generally use encoding techniques. We observe that in such environments, there is an opportunity to turn skew in both the data and predicate distributions (usually a problem for query processing) into a benefit that can be leveraged to encode the column values. This paper proposes a padded encoding scheme to address this opportunity. The proposed scheme creates encodings that map common attribute values to codes that can easily be distinguished from other codes by only examining a few bits in the full code. Consequently, scans on columns stored using the padded encoding scheme can safely prune the computation without examining all the bits in the code, thereby reducing the memory bandwidth and CPU cycles that are consumed when evaluating scan queries. Our padded encoding method results in a fixed-length encoding, as fixed-length encodings are easier to manage. However, the proposed padded encoding may produce longer (fixed-length) codes than those produced by popular order-preserving encoding methods, such as dictionary-based encoding. 
This additional space overhead has the potential to negate the gains from early pruning of the scan computation. However, as we demonstrate empirically, the additional space overhead is generally small, and the padded encoding scheme provides significant performance improvements.", "title": "" }, { "docid": "59de0b9497c56fb7659c0f2d8fdfdf6b", "text": "Aspergillus species are among the most important filamentous fungi from the viewpoints of industry, pathogenesis, and mycotoxin production. Fungal cells are exposed to a variety of environmental stimuli, including changes in osmolality, temperature, and pH, which create stresses that primarily act on fungal cell walls. In addition, fungal cell walls are the first interactions with host cells in either human or plants. Thus, understanding cell wall structure and the mechanism of their biogenesis is important for the industrial, medical, and agricultural fields. Here, we provide a systematic review of fungal cell wall structure and recent findings regarding the cell wall integrity signaling pathways in aspergilli. This accumulated knowledge will be useful for understanding and improving the use of industrial aspergilli fermentation processes as well as treatments for some fungal infections.", "title": "" }, { "docid": "458998f860076116c3fe9dda4ff1b2e9", "text": "Latent Dirichlet Allocation (LDA) and its variants have been widely used to discover latent topics in textual documents. However, some of the topics generated by LDA may be noisy with irrelevant words scattering across these topics. We name this kind of words as topic-indiscriminate words, which tend to make topics more ambiguous and less interpretable by humans. In our work, we propose a new topic model named TWLDA, which assigns low weights to words with low topic discriminating power (ability). Our experimental results show that the proposed approach, which effectively reduces the number of topic-indiscriminate words in discovered topics, improves the effectiveness of LDA.", "title": "" }, { "docid": "8813c7c18f0629680f537bdd0afcb1ba", "text": "A fault-tolerant (FT) control approach for four-wheel independently-driven (4WID) electric vehicles is presented. An adaptive control based passive fault-tolerant controller is designed to ensure the system stability when an in-wheel motor/motor driver fault happens. As an over-actuated system, it is challenging to isolate the faulty wheel and accurately estimate the control gain of the faulty in-wheel motor for 4WID electric vehicles. An active fault diagnosis approach is thus proposed to isolate and evaluate the fault. Based on the estimated control gain of the faulty in-wheel motor, the control efforts of all the four wheels are redistributed to relieve the torque demand on the faulty wheel. Simulations using a high-fidelity, CarSim, full-vehicle model show the effectiveness of the proposed in-wheel motor/motor driver fault diagnosis and fault-tolerant control approach.", "title": "" }, { "docid": "4fa2ca240a9d458933bf6f2474b2e008", "text": "Health Information Systems are becoming an important platform for healthcare services. In this context, Health Recommender Systems (HRS) are presented as complementary tools in decision making processes in healthcare services. Health Recommender Systems increase usability of technologies and reduce information overload in processes. In this paper, a literature review was conducted by following a review procedure. Major approaches in HRS were outlined and findings were discussed.
The paper presented current developments in the market, challenges and opportunities regarding to HRS and emerging approaches. It is believed that this study is an illuminating start-up point for HRS literature review.", "title": "" }, { "docid": "f838806a316b4267e166e7215db12166", "text": "This paper presents a computationally efficient method for action recognition from depth video sequences. It employs the so called depth motion maps (DMMs) from three projection views (front, side and top) to capture motion cues and uses local binary patterns (LBPs) to gain a compact feature representation. Two types of fusion consisting of feature-level fusion and decision-level fusion are considered. In the feature-level fusion, LBP features from three DMMs are merged before classification while in the decision-level fusion, a soft decision-fusion rule is used to combine the classification outcomes. The introduced method is evaluated on two standard datasets and is also compared with the existing methods. The results indicate that it outperforms the existing methods and is able to process depth video sequences in real-time.", "title": "" }, { "docid": "7a300ee432682af17ff338fc7d2ff778", "text": "Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision. However, they face a fundamental limitation: given enough data, the number of nodes in decision trees will grow exponentially with depth. For certain applications, for example on mobile or embedded processors, memory is a limited resource, and so the exponential growth of trees limits their depth, and thus their potential accuracy. This paper proposes decision jungles, revisiting the idea of ensembles of rooted decision directed acyclic graphs (DAGs), and shows these to be compact and powerful discriminative models for classification. Unlike conventional decision trees that only allow one path to every node, a DAG in a decision jungle allows multiple paths from the root to each leaf. We present and compare two new node merging algorithms that jointly optimize both the features and the structure of the DAGs efficiently. During training, node splitting and node merging are driven by the minimization of exactly the same objective function, here the weighted sum of entropies at the leaves. Results on varied datasets show that, compared to decision forests and several other baselines, decision jungles require dramatically less memory while considerably improving generalization.", "title": "" }, { "docid": "f3a838d6298c8ae127e548ba62e872eb", "text": "Plasmodium falciparum resistance to artemisinins, the most potent and fastest acting anti-malarials, threatens malaria elimination strategies. Artemisinin resistance is due to mutation of the PfK13 propeller domain and involves an unconventional mechanism based on a quiescence state leading to parasite recrudescence as soon as drug pressure is removed. The enhanced P. falciparum quiescence capacity of artemisinin-resistant parasites results from an increased ability to manage oxidative damage and an altered cell cycle gene regulation within a complex network involving the unfolded protein response, the PI3K/PI3P/AKT pathway, the PfPK4/eIF2α cascade and yet unidentified transcription factor(s), with minimal energetic requirements and fatty acid metabolism maintained in the mitochondrion and apicoplast. 
The detailed study of these mechanisms offers a way forward for identifying future intervention targets to fend off established artemisinin resistance.", "title": "" }, { "docid": "d2c36f67971c22595bc483ebb7345404", "text": "Resistive-switching random access memory (RRAM) devices utilizing a crossbar architecture represent a promising alternative for Flash replacement in high-density data storage applications. However, RRAM crossbar arrays require the adoption of diodelike select devices with high on-off -current ratio and with sufficient endurance. To avoid the use of select devices, one should develop passive arrays where the nonlinear characteristic of the RRAM device itself provides self-selection during read and write. This paper discusses the complementary switching (CS) in hafnium oxide RRAM, where the logic bit can be encoded in two high-resistance levels, thus being immune from leakage currents and related sneak-through effects in the crossbar array. The CS physical mechanism is described through simulation results by an ion-migration model for bipolar switching. Results from pulsed-regime characterization are shown, demonstrating that CS can be operated at least in the 10-ns time scale. The minimization of the reset current is finally discussed.", "title": "" }, { "docid": "c450da231d3c3ec8410fe621f4ced54a", "text": "Distant supervision is a widely applied approach to automatic training of relation extraction systems and has the advantage that it can generate large amounts of labelled data with minimal effort. However, this data may contain errors and consequently systems trained using distant supervision tend not to perform as well as those based on manually labelled data. This work proposes a novel method for detecting potential false negative training examples using a knowledge inference method. Results show that our approach improves the performance of relation extraction systems trained using distantly supervised data.", "title": "" }, { "docid": "8f916f7be3048ae2a367096f4f82207d", "text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.", "title": "" }, { "docid": "e5c4870acea1c7315cce0561f583626c", "text": "A discussion of CMOS readout technologies for infrared (IR) imaging systems is presented. First, the description of various types of IR detector materials and structures is given. The advances of detector fabrication technology and microelectronics process technology have led to the development of large format array of IR imaging detectors. For such large IR FPA’s which is the critical component of the advanced infrared imaging system, general requirement and specifications are described. To support a good interface between FPA and downstream signal processing stage, both conventional and recently developed CMOS readout techniques are presented and discussed. 
Finally, future development directions including the smart focal plane concept are also introduced.", "title": "" }, { "docid": "19a7753dc23ba979b2196c21d2ddf4a9", "text": "With the popularity of Internet applications and widespread use of mobile Internet, the Internet traffic maintains a rapid growth over the past decades. Internet traffic archival system (ITAS) for packets or flow records becomes more and more widely used in network monitor, network troubleshooting, user behavior and experience analysis etc. In this paper, we survey the design and implementation of several typical traffic archival systems. We analyze and compare the architectures and key technologies backing up Internet traffic archival system, and summarize the key technologies which include packet/flow capturing, packet/flow storage and bitmap index encoding algorithm, and dive into the packet/flow capturing technologies. Then, we propose the design and implementation of TiFaflow traffic archival system. Finally, we summarize and discuss the future direction of Internet traffic archival systems.", "title": "" }, { "docid": "70e6ce1ae00e6a6ed9af1f62f9764150", "text": "Citations play a pivotal role in indicating various aspects of scientific literature. Quantitative citation analysis approaches have been used over the decades to measure the impact factor of journals, to rank researchers or institutions, to discover evolving research topics etc. Researchers doubted the pure quantitative citation analysis approaches and argued that all citations are not equally important; citation reasons must be considered while counting. In the recent past, researchers have focused on identifying important citation reasons by classifying them into important and non-important classes rather than individually classifying each reason. Most of contemporary citation classification techniques either rely on full content of articles, or they are dominated by content based features. However, most of the time content is not freely available as various journal publishers do not provide open access to articles. This paper presents a binary citation classification scheme, which is dominated by metadata based parameters. The study demonstrates the significance of metadata and content based parameters in varying scenarios. The experiments are performed on two annotated data sets, which are evaluated by employing SVM, KLR, Random Forest machine learning classifiers. The results are compared with the contemporary study that has performed similar classification employing rich list of content-based features. The results of comparisons revealed that the proposed model has attained improved value of precision (i.e., 0.68) just by relying on freely available metadata. We claim that the proposed approach can serve as the best alternative in the scenarios wherein content in unavailable.", "title": "" }, { "docid": "e627c7ee8fd9a8a3ea8c7dc0a4fb91ce", "text": "The goal of a fall detection system is to automatically detect cases where a human falls and may have been injured. A natural application of such a system is in home monitoring of patients and elderly persons, so as to automatically alert relatives and/or authorities in case of an injury caused by a fall. This paper describes experiments with three computer vision methods for fall detection in a simulated home environment. The first method makes a decision based on a single frame, simply based on the vertical position of the image centroid of the person. 
The second method makes a threshold-based decision based on the last few frames, by considering the number of frames during which the person has been falling, the magnitude (in pixels) of the fall, and the maximum velocity of the fall. The third method is a statistical method that makes a decision based on the same features as the previous two methods, but using probabilistic models as opposed to thresholds for making the decision. Preliminary experimental results are promising, with the statistical method attaining relatively high accuracy in detecting falls while at the same time producing a relatively small number of false positives.", "title": "" }, { "docid": "7d0e59bee3b2a430ba0436f5df5621c0", "text": "The vertical dimension of interpersonal relations (relating to dominance, power, and status) was examined in association with nonverbal behaviors that included facial behavior, gaze, interpersonal distance, body movement, touch, vocal behaviors, posed encoding skill, and others. Results were separately summarized for people's beliefs (perceptions) about the relation of verticality to nonverbal behavior and for actual relations between verticality and nonverbal behavior. Beliefs/perceptions were stronger and much more prevalent than were actual verticality effects. Perceived and actual relations were positively correlated across behaviors. Heterogeneity was great, suggesting that verticality is not a psychologically uniform construct in regard to nonverbal behavior. Finally, comparison of the verticality effects to those that have been documented for gender in relation to nonverbal behavior revealed only a limited degree of parallelism.", "title": "" }, { "docid": "405022c5a2ca49973eaaeb1e1ca33c0f", "text": "BACKGROUND\nPreanalytical factors are the main source of variation in clinical chemistry testing and among the major determinants of preanalytical variability, sample hemolysis can exert a strong influence on result reliability. Hemolytic samples are a rather common and unfavorable occurrence in laboratory practice, as they are often considered unsuitable for routine testing due to biological and analytical interference. However, definitive indications on the analytical and clinical management of hemolyzed specimens are currently lacking. Therefore, the present investigation evaluated the influence of in vitro blood cell lysis on routine clinical chemistry testing.\n\n\nMETHODS\nNine aliquots, prepared by serial dilutions of homologous hemolyzed samples collected from 12 different subjects and containing a final concentration of serum hemoglobin ranging from 0 to 20.6 g/L, were tested for the most common clinical chemistry analytes. Lysis was achieved by subjecting whole blood to an overnight freeze-thaw cycle.\n\n\nRESULTS\nHemolysis interference appeared to be approximately linearly dependent on the final concentration of blood-cell lysate in the specimen. This generated a consistent trend towards overestimation of alanine aminotransferase (ALT), aspartate aminotransferase (AST), creatinine, creatine kinase (CK), iron, lactate dehydrogenase (LDH), lipase, magnesium, phosphorus, potassium and urea, whereas mean values of albumin, alkaline phosphatase (ALP), chloride, gamma-glutamyltransferase (GGT), glucose and sodium were substantially decreased. Clinically meaningful variations of AST, chloride, LDH, potassium and sodium were observed in specimens displaying mild or almost undetectable hemolysis by visual inspection (serum hemoglobin < 0.6 g/L). 
The rather heterogeneous and unpredictable response to hemolysis observed for several parameters prevented the adoption of reliable statistic corrective measures for results on the basis of the degree of hemolysis.\n\n\nCONCLUSION\nIf hemolysis and blood cell lysis result from an in vitro cause, we suggest that the most convenient corrective solution might be quantification of free hemoglobin, alerting the clinicians and sample recollection.", "title": "" }, { "docid": "bae5a6246cbdb2f2b3414bff562f4101", "text": "Fear of flying (FOF) affects an estimated 10-25% of the population. Patients with FOF (N = 49) were randomly assigned to virtual reality exposure (VRE) therapy, standard exposure (SE) therapy, or a wait-list (WL) control. Treatment consisted of 8 sessions over 6 weeks, with 4 sessions of anxiety management training followed by either exposure to a virtual airplane (VRE) or exposure to an actual airplane at the airport (SE). A posttreatment flight on a commercial airline measured participants' willingness to fly and anxiety during flight immediately after treatment. The results indicated that VRE and SE were both superior to WL, with no differences between VRE and SE. The gains observed in treatment were maintained at a 6-month follow up. By 6 months posttreatment, 93% of VRE participants and 93% of SE participants had flown. VRE therapy and SE therapy for treatment of FOF were unequivocally supported in this controlled study.", "title": "" } ]
scidocsrr
aab7ab1c6bc6a9960c3189e4f264999a
QuERy: A Framework for Integrating Entity Resolution with Query Processing
[ { "docid": "2eab78b8ec65340be1473086f31eb8c4", "text": "We present a new family of join algorithms, called ripple joins, for online processing of multi-table aggregation queries in a relational database management system (DBMS). Such queries arise naturally in interactive exploratory decision-support applications.\nTraditional offline join algorithms are designed to minimize the time to completion of the query. In contrast, ripple joins are designed to minimize the time until an acceptably precise estimate of the query result is available, as measured by the length of a confidence interval. Ripple joins are adaptive, adjusting their behavior during processing in accordance with the statistical properties of the data. Ripple joins also permit the user to dynamically trade off the two key performance factors of on-line aggregation: the time between successive updates of the running aggregate, and the amount by which the confidence-interval length decreases at each update. We show how ripple joins can be implemented in an existing DBMS using iterators, and we give an overview of the methods used to compute confidence intervals and to adaptively optimize the ripple join “aspect-ratio” parameters. In experiments with an initial implementation of our algorithms in the POSTGRES DBMS, the time required to produce reasonably precise online estimates was up to two orders of magnitude smaller than the time required for the best offline join algorithms to produce exact answers.", "title": "" }, { "docid": "d5f4b4bb8ac096ca32a93e823a527c5a", "text": "Entity matching is a crucial and difficult task for data integration. Entity matching frameworks provide several methods and their combination to effectively solve different match tasks. In this paper, we comparatively analyze 11 proposed frameworks for entity matching. Our study considers both frameworks which do or do not utilize training data to semiautomatically find an entity matching strategy to solve a given match task. Moreover, we consider support for blocking and the combination of different match algorithms. We further study how the different frameworks have been evaluated. The study aims at exploring the current state of the art in research prototypes of entity matching frameworks and their evaluations. The proposed criteria should be helpful to identify promising framework approaches and enable categorizing and comparatively assessing additional entity matching frameworks and their evaluations. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d95cd76008dd65d5d7f00c82bad013d3", "text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. 
Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.", "title": "" } ]
[ { "docid": "f61baa50a84ac9558b4789f910dae09a", "text": "PURPOSE\nThe purpose of this study was to determine whether specific language impairment (SLI) and dyslexia are distinct developmental disorders.\n\n\nMETHOD\nStudy 1 investigated the overlap between SLI identified in kindergarten and dyslexia identified in 2nd, 4th, or 8th grades in a representative sample of 527 children. Study 2 examined phonological processing in a subsample of participants, including 21 children with dyslexia only, 43 children with SLI only, 18 children with SLI and dyslexia, and 165 children with typical language/reading development. Measures of phonological awareness and nonword repetition were considered.\n\n\nRESULTS\nStudy 1 showed limited but statistically significant overlap between SLI and dyslexia. Study 2 found that children with dyslexia or a combination of dyslexia and SLI performed significantly less well on measures of phonological processing than did children with SLI only and those with typical development. Children with SLI only showed only mild deficits in phonological processing compared with typical children.\n\n\nCONCLUSIONS\nThese results support the view that SLI and dyslexia are distinct but potentially comorbid developmental language disorders. A deficit in phonological processing is closely associated with dyslexia but not with SLI when it occurs in the absence of dyslexia.", "title": "" }, { "docid": "50beb6d7c0581bf842b47008d2d981f2", "text": "O paper shows that the parameters in existing theoretical models of channel substitution such as offline transportation cost, online disutility cost, and the prices of online and offline retailers interact to determine consumer choice of channels. In this way, our results provide empirical support for many such models. In particular, we empirically examine the trade-off between the benefits of buying online and the benefits of buying in a local retail store. How does a consumer’s physical location shape the relative benefits of buying from the online world? We explore this problem using data from Amazon.com on the top-selling books for 1,497 unique locations in the United States for 10 months ending in January 2006. We show that when a store opens locally, people substitute away from online purchasing, even controlling for product-specific preferences by location. These estimates are economically large, suggesting that the disutility costs of purchasing online are substantial and that offline transportation costs matter. We also show that offline entry decreases consumers’ sensitivity to online price discounts. However, we find no consistent evidence that the breadth of the product line at a local retail store affects purchases.", "title": "" }, { "docid": "c15492fea3db1af99bc8a04bdff71fdc", "text": "The high cost of locating faults in programs has motivated the development of techniques that assist in fault localization by automating part of the process of searching for faults. Empirical studies that compare these techniques have reported the relative effectiveness of four existing techniques on a set of subjects. These studies compare the rankings that the techniques compute for statements in the subject programs and the effectiveness of these rankings in locating the faults. However, it is unknown how these four techniques compare with Tarantula, another existing fault-localization technique, although this technique also provides a way to rank statements in terms of their suspiciousness. 
Thus, we performed a study to compare the Tarantula technique with the four techniques previously compared. This paper presents our study---it overviews the Tarantula technique along with the four other techniques studied, describes our experiment, and reports and discusses the results. Our studies show that, on the same set of subjects, the Tarantula technique consistently outperforms the other four techniques in terms of effectiveness in fault localization, and is comparable in efficiency to the least expensive of the other four techniques.", "title": "" }, { "docid": "cfa2c14c4a978ca3aef394e1d6a056aa", "text": "In this paper a microstrip periodic leaky-wave antenna optimized for radiation scanning through broadside is presented. This antenna is based on the excitation of a leaky mode that propagates along the structure, with radiation occurring from the -1 space harmonic. The specific geometry of the metalization inside the unit cell permits a continuous scanning from the backward to the forward quadrant without any degradation of the beam, avoiding the presence of open-stopband effects. This new design provides a valid alternative to metamaterial leaky-wave antennas already presented in the literature and demonstrates the possibility of obtaining a conventional microstrip periodic LWA operating in the -1 harmonic with a backfire-to-endfire frequency-scanning capability.", "title": "" }, { "docid": "cf1967eaa2fe97a3de2b99aec0df27cb", "text": "We present a high gain linearly polarized Ku-band planar array for mobile satellite TV reception. In contrast with previously presented three dimensional designs, the approach presented here results in a low profile planar array with a similar performance. The elevation scan is performed electronically, whereas the azimuth scan is done mechanically using an electric motor. The incident angle of the arriving satellite signal is generally large, varying between 25° to 65° depending on the location of the receiver, thereby creating a considerable off-axis scan loss. In order to alleviate this problem, and yet maintaining a planar design, the antenna array is designed to be consisting of subarrays with a fixed scanned beam at 45°. Therefore, the array of fixed-beam subarrays needs to be scanned ±20° around their peak beam, which results in a higher combined gain/directivity. The proposed antenna demonstrates the minimum measured gain of 23.1 dBi throughout the scan range (for 65° scan) with the peak gain of 26.5 dBi (for 32° scan) at 12 GHz while occupying a circular aperture of 26 cm in diameter.", "title": "" }, { "docid": "e622e57bfd984f00f5c1fb072f4079f7", "text": "A vast amount of scientific information is encoded in natural language text, and the quantity of such text has become so great that it is no longer economically feasible to have a human as the first step in the search process. Natural language processing and text mining tools have become essential to facilitate the search for and extraction of information from text. This has led to vigorous research efforts to create useful tools and to create humanly labeled text corpora, which can be used to improve such tools. To encourage combining these efforts into larger, more powerful and more capable systems, a common interchange format to represent, store and exchange the data in a simple manner between different language processing systems and text mining tools is highly desirable. Here we propose a simple extensible mark-up language format to share text documents and annotations. 
The proposed annotation approach allows a large number of different annotations to be represented including sentences, tokens, parts of speech, named entities such as genes or diseases and relationships between named entities. In addition, we provide simple code to hold this data, read it from and write it back to extensible mark-up language files and perform some sample processing. We also describe completed as well as ongoing work to apply the approach in several directions. Code and data are available at http://bioc.sourceforge.net/. Database URL: http://bioc.sourceforge.net/", "title": "" }, { "docid": "eb17078285e6f528d0cd08178e1e57c2", "text": "This paper proposes a smart queue management system for delivering real-time service request updates to clients' smartphones in the form of audio and visual feedback. The proposed system aims at reducing the dissatisfaction with services with medium to long waiting times. To this end, the system allows carriers of digital tickets to leave the waiting areas and return in time for their turn to receive service. The proposed system also improves the waiting experience of clients choosing to stay in the waiting area by connecting them to the audio signal of the often muted television sets running entertainment programs, advertisement of services, or news. The system is a web of things including connected units for registering and verifying tickets, units for capturing and streaming audio and queue management, and participating client units in the form of smartphone applications. We implemented the proposed system and verified its functionality and report on our findings and areas of improvements.", "title": "" }, { "docid": "e99326029b5fbe438f2e1365266b99c7", "text": "Log-Structured-Merge (LSM) Tree gains much attention recently because of its superior performance in write-intensive workloads. LSM Tree uses an append-only structure in memory to achieve low write latency; at memory capacity, in-memory data are flushed to other storage media (e.g. disk). Consequently, read access is slower compared to write. These specific features of LSM, including no in-place update and asymmetric read/write performance, raise unique challenges in index maintenance for LSM. The structural difference between LSM and B-Tree also prevents mature B-Tree based approaches from being directly applied. To address the issues of index maintenance for LSM, we propose Diff-Index to support a spectrum of index maintenance schemes to suit different objectives in index consistency and performance. The schemes consist of sync-full, sync-insert, async-simple and async-session. Experiments on our HBase implementation quantitatively demonstrate that Diff-Index offers various performance/consistency balance and satisfactory scalability while avoiding global coordination. Sync-insert and async-simple can reduce 60%-80% of the overall index update latency when compared to the baseline sync-full; async-simple can achieve superior index update performance with an acceptable inconsistency. Diff-Index exploits LSM features such as versioning and the flush-compact process to achieve goals of concurrency control and failure recovery with low complexity and overhead.
Diff-Index is included in IBM InfoSphere BigInsights, an IBM big data offering.", "title": "" }, { "docid": "b4dcc5c36c86f9b1fef32839d3a1484d", "text": "The popular Disney Princess line includes nine films (e.g., Snow White, Beauty and the Beast) and over 25,000 marketable products. Gender role depictions of the prince and princess characters were examined with a focus on their behavioral characteristics and climactic outcomes in the films. Results suggest that the prince and princess characters differ in their portrayal of traditionally masculine and feminine characteristics, these gender role portrayals are complex, and trends towards egalitarian gender roles are not linear over time. Content coding analyses demonstrate that all of the movies portray some stereotypical representations of gender, including the most recent film, The Princess and the Frog. Although both the male and female roles have changed over time in the Disney Princess line, the male characters exhibit more androgyny throughout and less change in their gender role portrayals.", "title": "" }, { "docid": "1566c80c4624533292c7442c61f3be15", "text": "Modern software often relies on the combination of several software modules that are developed independently. There are use cases where different software libraries from different programming languages are used, e.g., embedding DLL files in JAVA applications. Even more complex is the case when different programming paradigms are combined like within applications with database connections, for instance PHP and SQL. Such a diversification of programming languages and modules in just one software application is becoming more and more important, as this leads to a combination of the strengths of different programming paradigms. But not always, the developers are experts in the different programming languages or even in different programming paradigms. So, it is desirable to provide easy to use interfaces that enable the integration of programs from different programming languages and offer access to different programming paradigms. In this paper we introduce a connector architecture for two programming languages of different paradigms: JAVA as a representative of object oriented programming languages and PROLOG for logic programming. Our approach provides a fast, portable and easy to use communication layer between JAVA and PROLOG. The exchange of information is done via a textual term representation which can be used independently from a deployed PROLOG engine. The proposed connector architecture allows for Object Unification on the JAVA side. We provide an exemplary connector for JAVA and SWI-PROLOG, a well-known PROLOG implementation.", "title": "" }, { "docid": "0ff27e119ec045674b9111bb5a9e5d29", "text": "Description: This book provides an introduction to the complex field of ubiquitous computing Ubiquitous Computing (also commonly referred to as Pervasive Computing) describes the ways in which current technological models, based upon three base designs: smart (mobile, wireless, service) devices, smart environments (of embedded system devices) and smart interaction (between devices), relate to and support a computing vision for a greater range of computer devices, used in a greater range of (human, ICT and physical) environments and activities. The author details the rich potential of ubiquitous computing, the challenges involved in making it a reality, and the prerequisite technological infrastructure. 
Additionally, the book discusses the application and convergence of several current major and future computing trends.-Provides an introduction to the complex field of ubiquitous computing-Describes how current technology models based upon six different technology form factors which have varying degrees of mobility wireless connectivity and service volatility: tabs, pads, boards, dust, skins and clay, enable the vision of ubiquitous computing-Describes and explores how the three core designs (smart devices, environments and interaction) based upon current technology models can be applied to, and can evolve to, support a vision of ubiquitous computing and computing for the future-Covers the principles of the following current technology models, including mobile wireless networks, service-oriented computing, human computer interaction, artificial intelligence, context-awareness, autonomous systems, micro-electromechanical systems, sensors, embedded controllers and robots-Covers a range of interactions, between two or more UbiCom devices, between devices and people (HCI), between devices and the physical world.-Includes an accompanying website with PowerPoint slides, problems and solutions, exercises, bibliography and further reading Graduate students in computer science, electrical engineering and telecommunications courses will find this a fascinating and useful introduction to the subject. It will also be of interest to ICT professionals, software and network developers and others interested in future trends and models of computing and interaction over the next decades.", "title": "" }, { "docid": "3d5ff3fc4c1ea3d3c8a76e93e331dc57", "text": "Autonomous mobile robots navigating in changing and dynamic unstructured environments like the outdoor environments need to cope with large amounts of uncertainties that are inherent of natural environments. The traditional type-1 fuzzy logic controller (FLC) using precise type-1 fuzzy sets cannot fully handle such uncertainties. A type-2 FLC using type-2 fuzzy sets can handle such uncertainties to produce a better performance. In this paper, we present a novel reactive control architecture for autonomous mobile robots that is based on type-2 FLC to implement the basic navigation behaviors and the coordination between these behaviors to produce a type-2 hierarchical FLC. In our experiments, we implemented this type-2 architecture in different types of mobile robots navigating in indoor and outdoor unstructured and challenging environments. The type-2-based control system dealt with the uncertainties facing mobile robots in unstructured environments and resulted in a very good performance that outperformed the type-1-based control system while achieving a significant rule reduction compared to the type-1 system.", "title": "" }, { "docid": "4ac3c3fb712a1121e0990078010fe4b0", "text": "1.1 Introduction Relational data has two characteristics: first, statistical dependencies exist between the entities we wish to model, and second, each entity often has a rich set of features that can aid classification. For example, when classifying Web documents, the page's text provides much information about the class label, but hyperlinks define a relationship between pages that can improve classification [Taskar et al., 2002]. Graphical models are a natural formalism for exploiting the dependence structure among entities. 
Traditionally, graphical models have been used to represent the joint probability distribution p(y, x), where the variables y represent the attributes of the entities that we wish to predict, and the input variables x represent our observed knowledge about the entities. But modeling the joint distribution can lead to difficulties when using the rich local features that can occur in relational data, because it requires modeling the distribution p(x), which can include complex dependencies. Modeling these dependencies among inputs can lead to intractable models, but ignoring them can lead to reduced performance. A solution to this problem is to directly model the conditional distribution p(y|x), which is sufficient for classification. This is the approach taken by conditional random fields [Lafferty et al., 2001]. A conditional random field is simply a conditional distribution p(y|x) with an associated graphical structure. Because the model is", "title": "" }, { "docid": "3e6022c053b65ba0517423f7a69448a2", "text": "Holography, a revolutionary 3D imaging technique, has been developed for storing and recovering the amplitude and phase of light scattered by objects. Later, single-beam computer-generated phase holography was proposed for restoring the wavefront from a given incidence. However, because the phase modulation depends on the light propagation inside the material, the thickness of phase holograms usually remains comparable to the wavelength. Here we experimentally demonstrate ultra-thin metasurface holograms that operate in the visible range whose thickness is only 30 nm (approximately 1/23 of the operational wavelength). To our knowledge, this is the thinnest hologram that can provide both amplitude and phase modulation in the visible wavelength range, which generates high-resolution low-noise images. Using this technique, not only the phase, but potentially the amplitude of the incident wave can be efficiently controlled, expanding the route to new applications of ultra-thin and surface-confined photonic devices. DOI: 10.1038/ncomms3807", "title": "" }, { "docid": "c1382d8ec524fcc6984f3a45de26d0f2", "text": "In the real world, the environment is often dynamic instead of stable. Usually the underlying data of a problem changes with time, which enhances the difficulties when learning a model from data. In this paper, different methods capable of detecting changes from high-speed time-changing data streams are compared. These methods are appropriate to be embedded inside learning models, allowing the adaptation to a non-stationary problem. The experimental evaluation considers different types of concept drift and data streams with different properties. Assessing measures such as false alarm rates, number of samples until a change is detected and miss detection rates, a comparison between the algorithms' capability of consistent detection is given. The choice of the best detection algorithm relies on a trade-off between the rate of false alarms and miss detections and the delay time until detection.", "title": "" }, { "docid": "23b90259d48fe9792ee232aad4ca56be", "text": "Plate tectonics is a self-organizing global system driven by the negative buoyancy of the thermal boundary layer resulting in subduction. Although the signature of plate tectonics is recognized with some confidence in the Phanerozoic geological record of the continents, evidence for plate tectonics becomes less certain further back in time.
To improve our understanding of plate tectonics on the Earth in the Precambrian we have to combine knowledge derived from the geological record with results from well-constrained numerical modeling. In a series of experiments using a 2D petrological–thermomechanical numerical model of oceanic subduction we have systematically investigated the dependence of tectono-metamorphic and magmatic regimes at an active plate margin on upper-mantle temperature, crustal radiogenic heat production, degree of lithospheric weakening and other parameters. We have identified a first-order transition from a \"no-subduction\" tectonic regime through a \"pre-subduction\" tectonic regime to the modern style of subduction. The first transition is gradual and occurs at upper-mantle temperatures between 250 and 200 K above the present-day values, whereas the second transition is more abrupt and occurs at 175–160 K. The link between geological observations and model results suggests that the transition to the modern plate tectonic regime might have occurred during the Mesoarchean–Neoarchean time (ca. 3.2–2.5 Ga). In the case of the \"pre-subduction\" tectonic regime (upper-mantle temperature 175–250 K above the present) the plates are weakened by intense percolation of melts derived from the underlying hot melt-bearing sub-lithospheric mantle. In such cases, convergence does not produce self-sustaining one-sided subduction, but rather results in shallow underthrusting of the oceanic plate under the continental plate. Further increase in the upper-mantle temperature (>250 K above the present) causes a transition to a \"no-subduction\" regime where horizontal movements of small deformable plate fragments are accommodated by internal strain and even shallow underthrusts do not form under the imposed convergence. Thus, based on the results of the numerical modeling, we suggest that the crucial parameter controlling the tectonic regime is the degree of lithospheric weakening induced by emplacement of sub-lithospheric melts into the lithosphere. A lower melt flux at upper-mantle temperatures <175–160 K results in a lesser degree of melt-related weakening leading to stronger plates, which stabilizes modern style subduction even at high mantle temperatures.", "title": "" }, { "docid": "20bcf837048350386e091eb33ad130cc", "text": "We describe a design pattern for writing programs that traverse data structures built from rich mutually-recursive data types. Such programs often have a great deal of \"boilerplate\" code that simply walks the structure, hiding a small amount of \"real\" code that constitutes the reason for the traversal. Our technique allows most of this boilerplate to be written once and for all, or even generated mechanically, leaving the programmer free to concentrate on the important part of the algorithm. These generic programs are much more adaptive when faced with data structure evolution because they contain many fewer lines of type-specific code. Our approach is simple to understand, reasonably efficient, and it handles all the data types found in conventional functional programming languages. It makes essential use of rank-2 polymorphism, an extension found in some implementations of Haskell.
Further it relies on a simple type-safe cast operator.", "title": "" }, { "docid": "d8f8931af18f3e0a6424916dfac717ee", "text": "Twitter data have brought new opportunities to know what happens in the world in real-time, and conduct studies on the human subjectivity on a diversity of issues and topics at large scale, which would not be feasible using traditional methods. However, as well as these data represent a valuable source, a vast amount of noise can be found in them. Because of the brevity of texts and the widespread use of mobile devices, non-standard word forms abound in tweets, which degrade the performance of Natural Language Processing tools. In this paper, a lexical normalization system of tweets written in Spanish is presented. The system suggests normalization candidates for out-of-vocabulary (OOV) words based on similarity of graphemes or phonemes. Using contextual information, the best correction candidate for a word is selected. Experimental results show that the system correctly detects OOV words and the most of cases suggests the proper corrections. Together with this, results indicate a room for improvement in the correction candidate selection. Compared with other methods, the overall performance of the system is above-average and competitive to different approaches in the literature.", "title": "" }, { "docid": "a072bd3d53462b143c3e46ff4743b5d5", "text": "Recognition of real-world entities is crucial for most NLP applications. Since its introduction some twenty years ago, named entity processing has undergone a significant evolution with, among others, the definition of new tasks (e.g. entity linking) and the emergence of new types of data (e.g. speech transcriptions, micro-blogging). These pose certainly new challenges which affect not only methods and algorithms but especially linguistic resources. Where do we stand with respect to named entity resources? This paper aims at providing a systematic overview of named entity resources, accounting for qualities such as multilingualism, dynamicity and interoperability, and to identify shortfalls in order to guide future developments.", "title": "" }, { "docid": "7bf8b7e4698bd0ef951879f68083fd7e", "text": "Brain injury induced by fluid percussion in rats caused a marked elevation in extracellular glutamate and aspartate adjacent to the trauma site. This increase in excitatory amino acids was related to the severity of the injury and was associated with a reduction in cellular bioenergetic state and intracellular free magnesium. Treatment with the noncompetitive N-methyl-D-aspartate (NMDA) antagonist dextrophan or the competitive antagonist 3-(2-carboxypiperazin-4-yl)propyl-1-phosphonic acid limited the resultant neurological dysfunction; dextrorphan treatment also improved the bioenergetic state after trauma and increased the intracellular free magnesium. Thus, excitatory amino acids contribute to delayed tissue damage after brain trauma; NMDA antagonists may be of benefit in treating acute head injury.", "title": "" } ]
scidocsrr
03852f4bd5ac1b8bc68a42ca817d084f
CPW-Fed Circular Fractal Slot Antenna Design for Dual-Band Applications
[ { "docid": "28625bffdddecacbf217aef469df68c8", "text": "An ultrawide-band coplanar waveguide (CPW) fed slot antenna is presented. A rectangular slot antenna is excited by a 50-/spl Omega/ CPW with a U-shaped tuning stub. The impedance bandwidth, from both measurement and simulation, is about 110% (S11<-10 dB). The antenna radiates bi-directionally. The radiation patterns obtained from simulations are found to be stable across the matching band and experimental verification is provided at the high end of the band.", "title": "" } ]
[ { "docid": "e507c60b8eb437cbd6ca9692f1bf8727", "text": "We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions in both estimating accuracies, and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.", "title": "" }, { "docid": "5bdaaf3735bcd39dd66a8bea79105a95", "text": "Retailing, and particularly fashion retailing, is changing into a much more technology driven business model using omni-channel retailing approaches. Also analytical and data-driven marketing is on the rise. However, there has not been paid a lot of attention to the underlying and underpinning datastructures, the characteristics for fashion retailing, the relationship between static and dynamic data, and the governance of this. This paper is analysing and discussing the data dimension of fashion retailing with focus on data-model development, master data management and the impact of this on business development in the form of increased operational effectiveness, better adaptation the omni-channel environment and improved alignment between the business strategy and the supporting data. The paper presents a case study of a major European fashion retail and wholesale company that is in the process of reorganising its master data model and master data governance to remove silos of data, connect and utilise data across business processes, and design a global product master data database that integrates data for all existing and expected sales channels. As a major finding of this paper is fashion retailing needs more strict master data governance than general retailing as products are plenty, designed products are not necessarily marketed, and product life-cycles generally are short.", "title": "" }, { "docid": "3ac1ceb1656f4ede34e417d17df41b9e", "text": "We study the problem of link prediction in coupled networks, where we have the structure information of one (source) network and the interactions between this network and another (target) network. The goal is to predict the missing links in the target network. The problem is extremely challenging as we do not have any information of the target network. Moreover, the source and target networks are usually heterogeneous and have different types of nodes and links. How to utilize the structure information in the source network for predicting links in the target network? How to leverage the heterogeneous interactions between the two networks for the prediction task?\n We propose a unified framework, CoupledLP, to solve the problem. 
Given two coupled networks, we first leverage atomic propagation rules to automatically construct implicit links in the target network for addressing the challenge of target network incompleteness, and then propose a coupled factor graph model to incorporate the meta-paths extracted from the coupled part of the two networks for transferring heterogeneous knowledge. We evaluate the proposed framework on two different genres of datasets: disease-gene (DG) and mobile social networks. In the DG networks, we aim to use the disease network to predict the associations between genes. In the mobile networks, we aim to use the mobile communication network of one mobile operator to infer the network structure of its competitors. On both datasets, the proposed CoupledLP framework outperforms several alternative methods. The proposed problem of coupled link prediction and the corresponding framework demonstrate both the scientific and business applications in biology and social networks.", "title": "" }, { "docid": "a3d95604c143f1cd511fd62fe62bb4f4", "text": "We propose a new method for unconstrained optimization of a smooth and strongly convex function, which attains the optimal rate of convergence of Nesterov’s accelerated gradient descent. The new algorithm has a simple geometric interpretation, loosely inspired by the ellipsoid method. We provide some numerical evidence that the new method can be superior to Nesterov’s accelerated gradient descent.", "title": "" }, { "docid": "ea5bc45b903df5c1293bc437a437ac83", "text": "We have all visited several stores to check prices and/or to find the right item or the right size. Similarly, it can take time and effort for a worker to find a suitable job with suitable pay, and for employers to receive and evaluate applications for job openings. Search theory explores the workings of markets once facts such as these are incorporated into the analysis. Adequate analysis of market frictions needs to consider how reactions to frictions change the overall economic environment: not only do frictions change incentives for buyers and sellers, but the responses to the changed incentives also alter the economic environment for all the participants in the market. Because of these feedback effects, seemingly small frictions can have large effects on outcomes. Equilibrium search theory is the development of basic models to permit analysis of economic outcomes when specific frictions are incorporated into simpler market models. The primary friction addressed by search theory is the need to spend time and effort to learn about opportunities—opportunities to buy or to sell, to hire or to be hired. There are many aspects of a job and of a worker that matter when deciding whether a particular match is worthwhile. Such frictions are naturally analyzed in models that consider a process over time—of workers seeking jobs, firms seeking employees, borrowers seeking lenders, and shoppers buying items that are not part of frequent shopping. Search theory models have altered the way we think about markets, how we interpret market data, and how we think about government policies. The complexity of the economy calls for the use of multiple models that address different aspects of the determinants of unemployment (and other) outcomes. This view was captured so well by Alfred Marshall (1890: 1948 edition, p.
366) that I have quoted this passage repeatedly since coming upon it while doing research for the Churchill Lectures (Diamond 1994b).", "title": "" }, { "docid": "11538da6cfda3a81a7ddec0891aae1d9", "text": "This work presents a dataset and annotation scheme for the new task of identifying “good” conversations that occur online, which we call ERICs: Engaging, Respectful, and/or Informative Conversations. We develop a taxonomy to reflect features of entire threads and individual comments which we believe contribute to identifying ERICs; code a novel dataset of Yahoo News comment threads (2.4k threads and 10k comments) and 1k threads from the Internet Argument Corpus; and analyze the features characteristic of ERICs. This is one of the largest annotated corpora of online human dialogues, with the most detailed set of annotations. It will be valuable for identifying ERICs and other aspects of argumentation, dialogue, and discourse.", "title": "" }, { "docid": "6ddf62a60b0d56c76b54ca6cd0b28ab9", "text": "Improvement of vehicle safety performance is one of the targets of ITS development. A pre-crash safety system has been developed that utilizes ITS technologies. The Pre-crash Safety system reduces collision injury by estimating TTC (time-to-collision) to preemptively activate safety devices, which consist of “Pre-crash Seatbelt” system and “Pre-crash Brake Assist” system. The key technology of these systems is a “Pre-crash Sensor” to detect obstacles and estimate TTC. In this paper, the Pre-crash Sensor is presented. The Pre-crash Sensor uses millimeter-wave radar to detect preceding vehicles, oncoming vehicles, roadside objects, etc. on the road ahead. Furthermore, by using a phased array system as a vehicle radar for the first time, a compact electronically scanned millimeter-wave radar with high recognition performance has been achieved. With respect to the obstacle determination algorithm, a crash determination algorithm has been newly developed, taking into account estimation of the direction of advance of the vehicle, in addition to the distance, relative speed and direction of the object.", "title": "" }, { "docid": "382fd1b9fca8163718548522ce05c58d", "text": "Software development involves a number of interrelated factors which affect development effort and productivity. Since many of these relationships are not well understood, accurate estimation of software development time and effort is a difficult problem. Most estimation models in use or proposed in the literature are based on regression techniques. This paper examines the potential of two artificial intelligence approaches i.e. artificial neural network and case-based reasoning for creating development effort estimation models. Artificial neural network can provide accurate estimates when there are complex relationships between variables and where the input data is distorted by high noise levels. Case-based reasoning solves problems by adapting solutions from old problems similar to the current problem. This research examines both the performance of back-propagation artificial neural networks in estimating software development effort and the potential of case-based reasoning for development estimation using the same dataset.", "title": "" }, { "docid": "91b96fd6754a97b69488632a4d1d602e", "text": "Face Super-Resolution (SR) is a domain-specific super-resolution problem. The facial prior knowledge can be leveraged to better super-resolve face images.
We present a novel deep end-to-end trainable Face Super-Resolution Network (FSRNet), which makes use of the geometry prior, i.e., facial landmark heatmaps and parsing maps, to super-resolve very low-resolution (LR) face images without well-aligned requirement. Specifically, we first construct a coarse SR network to recover a coarse high-resolution (HR) image. Then, the coarse HR image is sent to two branches: a fine SR encoder and a prior information estimation network, which extracts the image features, and estimates landmark heatmaps/parsing maps respectively. Both image features and prior information are sent to a fine SR decoder to recover the HR image. To generate realistic faces, we also propose the Face Super-Resolution Generative Adversarial Network (FSRGAN) to incorporate the adversarial loss into FSRNet. Further, we introduce two related tasks, face alignment and parsing, as the new evaluation metrics for face SR, which address the inconsistency of classic metrics w.r.t. visual perception. Extensive experiments show that FSRNet and FSRGAN significantly outperforms state of the arts for very LR face SR, both quantitatively and qualitatively.", "title": "" }, { "docid": "e35994d3f2cb82666115a001dbd002d0", "text": "Filtering relevant documents with respect to entities is an essential task in the context of knowledge base construction and maintenance. It entails processing a time-ordered stream of documents that might be relevant to an entity in order to select only those that contain vital information. State-of-the-art approaches to document filtering for popular entities are entity-dependent: they rely on and are also trained on the specifics of differentiating features for each specific entity. Moreover, these approaches tend to use so-called extrinsic information such as Wikipedia page views and related entities which is typically only available only for popular head entities. Entity-dependent approaches based on such signals are therefore ill-suited as filtering methods for long-tail entities. In this paper we propose a document filtering method for long-tail entities that is entity-independent and thus also generalizes to unseen or rarely seen entities. It is based on intrinsic features, i.e., features that are derived from the documents in which the entities are mentioned. We propose a set of features that capture informativeness, entity-saliency, and timeliness. In particular, we introduce features based on entity aspect similarities, relation patterns, and temporal expressions and combine these with standard features for document filtering. Experiments following the TREC KBA 2014 setup on a publicly available dataset show that our model is able to improve the filtering performance for long-tail entities over several baselines. Results of applying the model to unseen entities are promising, indicating that the model is able to learn the general characteristics of a vital document. The overall performance across all entities---i.e., not just long-tail entities---improves upon the state-of-the-art without depending on any entity-specific training data.", "title": "" }, { "docid": "f825dbbc9ff17178a81be71c5b9312ae", "text": "Skills like computational thinking, problem solving, handling complexity, team-work and project management are essential for future careers and needs to be taught to students at the elementary level itself. 
Computer programming knowledge and skills, experiencing technology and conducting science and engineering experiments are also important for students at elementary level. However, teaching such skills effectively through active learning can be challenging for educators. In this paper, we present our approach and experiences in teaching such skills to several elementary level children using Lego Mindstorms EV3 robotics education kit. We describe our learning environment consisting of lessons, worksheets, hands-on activities and assessment. We taught students how to design, construct and program robots using components such as motors, sensors, wheels, axles, beams, connectors and gears. Students also gained knowledge on basic programming constructs such as control flow, loops, branches and conditions using a visual programming environment. We carefully observed how students performed various tasks and solved problems. We present experimental results which demonstrates that our teaching methodology consisting of both the course content and pedagogy was effective in imparting the desired skills and knowledge to elementary level children. The students also participated in a competitive World Robot Olympiad India event and qualified during the regional round which is an evidence of the effectiveness of the approach.", "title": "" }, { "docid": "b62d14843bfec4af197f374f83a88c96", "text": "In this paper, we study the problem of learning vision-based dynamic manipulation skills using a scalable reinforcement learning approach. We study this problem in the context of grasping, a longstanding challenge in robotic manipulation. In contrast to static learning behaviors that choose a grasp point and then execute the desired grasp, our method enables closed-loop vision-based control, whereby the robot continuously updates its grasp strategy based on the most recent observations to optimize long-horizon grasp success. To that end, we introduce QT-Opt, a scalable self-supervised vision-based reinforcement learning framework that can leverage over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters to perform closed-loop, real-world grasping that generalizes to 96% grasp success on unseen objects. Aside from attaining a very high success rate, our method exhibits behaviors that are quite distinct from more standard grasping systems: using only RGB visionbased perception from an over-the-shoulder camera, our method automatically learns regrasping strategies, probes objects to find the most effective grasps, learns to reposition objects and perform other non-prehensile pre-grasp manipulations, and responds dynamically to disturbances and perturbations.4", "title": "" }, { "docid": "26827d9a84d4866438d69813dd3741b1", "text": "In this study, we present an evaluation of using various methods for face recognition. As feature extracting techniques we benefit from wavelet decomposition and Eigenfaces method which is based on Principal Component Analysis (PCA). After generating feature vectors, distance classifier and Support Vector Machines (SVMs) are used for classification step. We examined the classification accuracy according to increasing dimension of training set, chosen feature extractor–classifier pairs and chosen kernel function for SVM classifier. As test set we used ORL face database which is known as a standard face database for face recognition applications including 400 images of 40 people. 
At the end of the overall separation task, we obtained the classification accuracy 98.1% with Wavelet–SVM approach for 240 image training set. As a special study of pattern recognition, face recognition has had crucial effects in daily life especially for security purposes. Face recognition task is actively being used at airports, employee entries , criminal detection systems, etc. For this task many methods have been proposed and tested. Most of these methods have trade off's like hardware requirements, time to update image database, time for feature extraction, response time. Generally face recognition methods are composed of a feature extractor (like PCA, Wavelet decomposer) to reduce the size of input and a classifier like Neural Networks, Support Vector Machines, Nearest Distance Classifiers to find the features which are most likely to be looked for. In this study, we chose wavelet decomposition and Eigenfaces method which is based on Principal Component Analysis (PCA) as main techniques for data reduction and feature extraction. PCA is an efficient and long term studied method to extract feature sets by creating a feature space. PCA also has low computation time which is an important advantage. On the other hand because of being a linear feature extraction method, PCA is inefficient especially when nonlinearities are present in the underlying relationships (Kursun & Favorov, 2004). Wavelet decomposition is a multilevel dimension reduction process that makes time–space–frequency analysis. Unlike Fourier transform, which provides only frequency analysis of signals, wavelet transforms provide time–frequency analysis, which is particularly useful for pattern recognition (Gorgel, Sertbas, Kilic, Ucan, & Osman, 2009). In this study, we used available 40 classes in the ORL face recognition dataset (ORL Database of Faces, 1994). Eigenfaces and Discrete Wavelet Transform are used for feature extractor. For the classification step, we consider Support Vector Machines (SVM) and nearest distance classification …", "title": "" }, { "docid": "d1622f3a2cf81758fa2084506dcd65f2", "text": "Students who enrol in the undergraduate program on informatics at the Hellenic Open University (HOU) demonstrate significant difficulties in advancing beyond the introductory courses. We have embarked in an effort to analyse their academic performance throughout the academic year, as measured by the homework assignments, and attempt to derive short rules that explain and predict success or failure in the final exams. In this paper we review previous approaches, compare them with genetic algorithm based induction of decision trees and argue why our approach has a potential for developing into an alert tool.", "title": "" }, { "docid": "af420d60e9aafb9aa39da5381a681b76", "text": "In this paper, a novel planar Marchand balun using a patterned ground plane is presented. In the new design, with a slot under the coupled lines cut on the ground plane, the even-mode impedance can be increased substantially. Meanwhile, we propose that two additional separated rectangular conductors are placed under the coupled lines to act as two capacitors so that the odd-mode impedance is decreased. Design theory and procedure are presented to optimize the Marchand balun. As an example, one Marchand balun on a double-sided PCB is designed, simulated, fabricated and measured. The measured return loss is found to be better than – 10 dB over the frequency band from 1.2 GHz to 3.3 GHz, or around 100% bandwidth. 
The measured amplitude and phase imbalance between the two balanced output ports are within 1 dB and 4°, respectively, over the operating frequency band. Index Terms — Baluns, coupled lines, wideband", "title": "" }, { "docid": "c74e3880a4bd7fe69f0c690fa4e4fdc4", "text": "This paper presents a parallel real time framework for emotions and mental states extraction and recognition from video fragments of human movements. In the experimental setup human hands are tracked by evaluation of moving skin-colored objects. The tracking analysis demonstrates that acceleration and frequency characteristics of the traced objects are relevant for classification of the emotional expressiveness of human movements. The outcomes of the emotional and mental states recognition are cross-validated with the analysis of two independent certified movement analysts (CMA’s) who use the Laban movement analysis (LMA) method. We argue that LMA based computer analysis can serve as a common language for expressing and interpreting emotional movements between robots and humans, and in that way it resembles the common coding principle between action and perception by humans and primates that is embodied by the mirror neuron system. The solution is part of a larger project on interaction between a human and a humanoid robot with the aim of training social behavioral skills to autistic children with robots acting in a natural environment. © 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "fd27a21d2eaf5fc5b37d4cba6bd4dbef", "text": "RICHARD M. FELDER and JONI SPURLIN North Carolina State University, Raleigh, North Carolina 27695–7905, USA. E-mail: rmfelder@mindspring.com The Index of Learning Styles (ILS) is an instrument designed to assess preferences on the four dimensions of the Felder-Silverman learning style model. The Web-based version of the ILS is taken hundreds of thousands of times per year and has been used in a number of published studies, some of which include data reflecting on the reliability and validity of the instrument. This paper seeks to provide the first comprehensive examination of the ILS, including answers to several questions: (1) What are the dimensions and underlying assumptions of the model upon which the ILS is based? (2) How should the ILS be used and what misuses should be avoided? (3) What research studies have been conducted using the ILS and what conclusions regarding its reliability and validity may be inferred from the data?", "title": "" }, { "docid": "811edf1cfc3a36c6a2e136b2d25f5027", "text": "Success for many businesses depends on their information software systems. Keeping these systems operational is critical, as failure in these systems is costly. Such systems are in many cases sophisticated, distributed and dynamically composed. To ensure high availability and correct operation, it is essential that failures be detected promptly, their causes diagnosed and remedial actions taken. Although automated recovery approaches exist for specific problem domains, the problem-resolution process is in many cases manual and painstaking. Computer support personnel put a great deal of effort into resolving the reported failures. The growing size and complexity of these systems creates the need to automate this process. The primary focus of our research is on automated fault diagnosis and recovery using discrete monitoring data such as log files and notifications. Our goal is to quickly pinpoint the root-cause of a failure.
Our contributions are: • Modelling discrete monitoring data for automated analysis, • Automatically leveraging common symptoms of failures from historic monitoring data using such models to pinpoint faults, and • Providing a model for decision-making under uncertainty such that appropriate recovery actions are chosen. Failures in such systems are caused by software defects, human error, hardware failures, environmental conditions and malicious behaviour. Our primary focus in this thesis is on software defects and misconfiguration.", "title": "" }, { "docid": "b6a5cb59faea3e32d0046c0809ff715b", "text": "This paper discusses a novel fast approach for moving object detection in H.264/AVC compressed domain for video surveillance applications. The proposed algorithm initially segments out edges from regions with motion at macroblock level by utilizing the gradient of quantization parameter over 2D-image space. A spatial median filtering of the segmented edges followed by weighted temporal accumulation accounts for whole object segmentation. To attain sub-macroblock (4×4) level precision, the size of macroblocks (in bits) is interpolated using a two tap filter. Partial decoding rules out the complexity involved in full decoding and gives fast foreground segmentation results. Compared to other compressed domain techniques, the proposed approach allows the video streams to be encoded with different quantization parameters across macroblocks thereby increasing flexibility in bit rate adjustment.", "title": "" }, { "docid": "b6637d1367e47a550f5a5b29b7224be9", "text": "Ovarian cancer is the most lethal gynecologic malignancy among women worldwide and is presumed to result from the presence of ovarian cancer stem cells. To overcome the limitation of current anticancer agents, another anticancer strategy is necessary to effectively target cancer stem cells in ovarian cancer. In many types of malignancies, including ovarian cancer, metformin, one of the most popular antidiabetic drugs, has been demonstrated to exhibit chemopreventive and anticancer efficacy with respect to incidence and overall survival rates. Thus, the metabolic reprogramming of cancer and cancer stem cells driven by genetic alterations during carcinogenesis and cancer progression could be therapeutically targeted. In this review, the potential efficacy and anticancer mechanisms of metformin against ovarian cancer stem cells will be discussed.", "title": "" } ]
scidocsrr
b3a0d6088d4b54eb041818457d25b6bb
Development of Employee Attendance and Payroll System using Fingerprint Biometrics
[ { "docid": "e125bd3935aace0b17f8ed4e431add63", "text": "Institutions, companies and organisations where security and net productivity is vital, access to certain areas must be controlled and monitored through an automated system of attendance. Managing people is a difficult task for most of the organizations and maintaining the attendance record is an important factor in people management. When considering the academic institute, taking the attendance of non-academic staff on daily basis and maintaining the records is a major task. Manually taking attendance and maintaining it for a long time adds to the difficulty of this task as well as wastes a lot of time. For this reason, an efficient system is proposed in this paper to solve the problem of manual attendance. This system takes attendance electronically with the help of a fingerprint recognition system, and all the records are saved for subsequent operations. Staff biometric attendance system employs an automated system to calculate attendance of staff in an organization and do further calculations of monthly attendance summary in order to reduce human errors during calculations. In essence, the proposed system can be employed in curbing the problems of lateness, buddy punching and truancy in any institution, organization or establishment. The proposed system will also improve the productivity of any organization if properly implemented.", "title": "" } ]
[ { "docid": "aa2b1a8d0cf511d5862f56b47d19bc6a", "text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:", "title": "" }, { "docid": "2ce2d44c6c19ad683989bbf8b117f778", "text": "Modern computer systems feature multiple homogeneous or heterogeneous computing units with deep memory hierarchies, and expect a high degree of thread-level parallelism from the software. Exploitation of data locality is critical to achieving scalable parallelism, but adds a significant dimension of complexity to performance optimization of parallel programs. This is especially true for programming models where locality is implicit and opaque to programmers. In this paper, we introduce the hierarchical place tree (HPT) model as a portable abstraction for task parallelism and data movement. The HPT model supports co-allocation of data and computation at multiple levels of a memory hierarchy. It can be viewed as a generalization of concepts from the Sequoia and X10 programming models, resulting in capabilities that are not supported by either. Compared to Sequoia, HPT supports three kinds of data movement in a memory hierarchy rather than just explicit data transfer between adjacent levels, as well as dynamic task scheduling rather than static task assignment. Compared to X10, HPT provides a hierarchical notion of places for both computation and data mapping. We describe our work-in-progress on implementing the HPT model in the Habanero-Java (HJ) compiler and runtime system. Preliminary results on general-purpose multicore processors and GPU accelerators indicate that the HPT model can be a promising portable abstraction for future multicore processors.", "title": "" }, { "docid": "1cf07400a152ea6bfac75c75bfb1eb7b", "text": "Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.", "title": "" }, { "docid": "d038c7b29701654f8ee908aad395fe8c", "text": "Vaginal fibroepithelial polyp is a rare lesion, and although benign, it can be confused with malignant connective tissue lesions. 
Treatment is simple excision, and recurrence is extremely uncommon. We report a case of a newborn with vaginal fibroepithelial polyp. The authors suggest that vaginal polyp must be considered in the evaluation of interlabial masses in prepubertal girls.", "title": "" }, { "docid": "f50342dfacd198dc094ef96415de4899", "text": "While the ubiquity and importance of nonliteral language are clear, people’s ability to use and understand it remains a mystery. Metaphor in particular has been studied extensively across many disciplines in cognitive science. One approach focuses on the pragmatic principles that listeners utilize to infer meaning from metaphorical utterances. While this approach has generated a number of insights about how people understand metaphor, to our knowledge there is no formal model showing that effects in metaphor understanding can arise from basic principles of communication. Building upon recent advances in formal models of pragmatics, we describe a computational model that uses pragmatic reasoning to interpret metaphorical utterances. We conduct behavioral experiments to evaluate the model’s performance and show that our model produces metaphorical interpretations that closely fit behavioral data. We discuss implications of the model for metaphor understanding, principles of communication, and formal models of language understanding.", "title": "" }, { "docid": "ff774eb7c90d4efadb190155ff606013", "text": "Communities are vehicles for efficiently disseminating news, rumors, and opinions in human social networks. Modeling information diffusion through a network can enable us to reach a superior functional understanding of the effect of network structures such as communities on information propagation. The intrinsic assumption is that form follows function---rational actors exercise social choice mechanisms to join communities that best serve their information needs. Particle Swarm Optimization (PSO) was originally designed to simulate aggregate social behavior; our proposed diffusion model, PSODM (Particle Swarm Optimization Diffusion Model) models information flow in a network by creating particle swarms for local network neighborhoods that optimize a continuous version of Holland's hyperplane-defined objective functions. In this paper, we show how our approach differs from prior modeling work in the area and demonstrate that it outperforms existing model-based community detection methods on several social network datasets.", "title": "" }, { "docid": "481f4a4b14d4594d8b023f9df074dfeb", "text": "We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analyses that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is used to an important and novel application SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks if a pair of image and depth map is consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post processing. 
We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attacking approaches.", "title": "" }, { "docid": "5273aa29ea18e8b1464b918625e6ccd8", "text": "This paper presents the results of the WMT17 shared tasks, which included three machine translation (MT) tasks (news, biomedical, and multimodal), two evaluation tasks (metrics and run-time estimation of MT quality), an automatic post-editing task, a neural MT training task, and a bandit learning task.", "title": "" }, { "docid": "e0f6878845e02e966908311e6818dbe9", "text": "Smart Home is one of emerging application domains of The Internet of things which following the computer and Internet. Although home automation technologies have been commercially available already, they are basically designed for signal-family smart homes with a high cost, and along with the constant growth of digital appliances in smart home, we merge smart home into smart-home-oriented Cloud to release the stress on the smart home system which mostly installs application software on their local computers. In this paper, we present a framework for Cloud-based smart home for enabling home automation, household mobility and interconnection which easy extensible and fit for future demands. Through subscribing services of the Cloud, smart home consumers can easily enjoy smart home services without purchasing computers which owns strong power and huge storage. We focus on the overall Smart Home framework, the features and architecture of the components of Smart Home, the interaction and cooperation between them in detail.", "title": "" }, { "docid": "53e7e1053129702b7fc32b32d11656da", "text": "A new and robust constant false alarm rate (CFAR) detector based on truncated statistics (TSs) is proposed for ship detection in single-look intensity and multilook intensity synthetic aperture radar data. The approach is aimed at high-target-density situations such as busy shipping lines and crowded harbors, where the background statistics are estimated from potentially contaminated sea clutter samples. The CFAR detector uses truncation to exclude possible statistically interfering outliers and TSs to model the remaining background samples. The derived truncated statistic CFAR (TS-CFAR) algorithm does not require prior knowledge of the interfering targets. The TS-CFAR detector provides accurate background clutter modeling, a stable false alarm regulation property, and improved detection performance in high-target-density situations.", "title": "" }, { "docid": "0bf5a87d971ff2dca4c8dfa176316663", "text": "A crucial privacy-driven issue nowadays is re-identifying anonymized social networks by mapping them to correlated cross-domain auxiliary networks. Prior works are typically based on modeling social networks as random graphs representing users and their relations, and subsequently quantify the quality of mappings through cost functions that are proposed without sufficient rationale. Also, it remains unknown how to algorithmically meet the demand of such quantifications, i.e., to find the minimizer of the cost functions. We address those concerns in a more realistic social network modeling parameterized by community structures that can be leveraged as side information for de-anonymization. By Maximum A Posteriori (MAP) estimation, our first contribution is new and well justified cost functions, which, when minimized, enjoy superiority to previous ones in finding the correct mapping with the highest probability. 
The feasibility of the cost functions is then for the first time algorithmically characterized. While proving the general multiplicative inapproximability, we are able to propose two algorithms, which, respectively, enjoy an -additive approximation and a conditional optimality in carrying out successful user re-identification. Our theoretical findings are empirically validated, with a notable dataset extracted from rare true cross-domain networks that reproduce genuine social network de-anonymization. Both theoretical and empirical observations also manifest the importance of community information in enhancing privacy inferencing.", "title": "" }, { "docid": "8087288ed5fe59292db81d30c885c4ba", "text": "We present a new cluster scheduler, Graphene, aimed at jobs that have a complex dependency structure and heterogeneous resource demands. Relaxing either of these challenges, i.e., scheduling a DAG of homogeneous tasks or an independent set of heterogeneous tasks, leads to NP-hard problems. Reasonable heuristics exist for these simpler problems, but they perform poorly when scheduling heterogeneous DAGs. Our key insights are: (1) focus on the long-running tasks and those with tough-to-pack resource demands, (2) compute a DAG schedule, offline, by first scheduling such troublesome tasks and then scheduling the remaining tasks without violating dependencies. These offline schedules are distilled to a simple precedence order and are enforced by an online component that scales to many jobs. The online component also uses heuristics to compactly pack tasks and to trade-off fairness for faster job completion. Evaluation on a 200-server cluster and using traces of production DAGs at Microsoft, shows that Graphene improves median job completion time by 25% and cluster throughput by 30%.", "title": "" }, { "docid": "ad33994b26dad74e6983c860c0986504", "text": "Accurate software effort estimation has been a challenge for many software practitioners and project managers. Underestimation leads to disruption in the project's estimated cost and delivery. On the other hand, overestimation causes outbidding and financial losses in business. Many software estimation models exist; however, none have been proven to be the best in all situations. In this paper, a decision tree forest (DTF) model is compared to a traditional decision tree (DT) model, as well as a multiple linear regression model (MLR). The evaluation was conducted using ISBSG and Desharnais industrial datasets. Results show that the DTF model is competitive and can be used as an alternative in software effort prediction.", "title": "" }, { "docid": "b91204ac8a118fcde9a774e925f24a7e", "text": "Document clustering has been recognized as a central problem in text data management. Such a problem becomes particularly challenging when document contents are characterized by subtopical discussions that are not necessarily relevant to each other. Existing methods for document clustering have traditionally assumed that a document is an indivisible unit for text representation and similarity computation, which may not be appropriate to handle documents with multiple topics. In this paper, we address the problem of multi-topic document clustering by leveraging the natural composition of documents in text segments that are coherent with respect to the underlying subtopics. We propose a novel document clustering framework that is designed to induce a document organization from the identification of cohesive groups of segment-based portions of the original documents.
We empirically give evidence of the significance of our segment-based approach on large collections of multi-topic documents, and we compare it to conventional methods for document clustering.", "title": "" }, { "docid": "7f69fbcda9d6ee11d5cc1591a88b6403", "text": "Voice conversion is defined as modifying the speech signal of one speaker (source speaker) so that it sounds as if it had been pronounced by a different speaker (target speaker). This paper describes a system for efficient voice conversion. A novel mapping function is presented which associates the acoustic space of the source speaker with the acoustic space of the target speaker. The proposed system is based on the use of a Gaussian Mixture Model, GMM, to model the acoustic space of a speaker and a pitch synchronous harmonic plus noise representation of the speech signal for prosodic modifications. The mapping function is a continuous parametric function which takes into account the probabilistic classification provided by the mixture model (GMM). Evaluation by objective tests showed that the proposed system was able to reduce the perceptual distance between the source and target speaker by 70%. Formal listening tests also showed that 97% of the converted speech was judged to be spoken from the target speaker while maintaining high speech quality.", "title": "" }, { "docid": "935d22c1fdddaab40d8c94384f08fab2", "text": "Face biometrics is widely used in various applications including border control and facilitating the verification of travellers' identity claim with respect to his electronic passport (ePass). As in most countries, passports are issued to a citizen based on the submitted photo which allows the applicant to provide a morphed face photo to conceal his identity during the application process. In this work, we propose a novel approach leveraging the transferable features from a pre-trained Deep Convolutional Neural Networks (D-CNN) to detect both digital and print-scanned morphed face image. Thus, the proposed approach is based on the feature level fusion of the first fully connected layers of two D-CNN (VGG19 and AlexNet) that are specifically fine-tuned using the morphed face image database. The proposed method is extensively evaluated on the newly constructed database with both digital and print-scanned morphed face images corresponding to bona fide and morphed data reflecting a real-life scenario. The obtained results consistently demonstrate improved detection performance of the proposed scheme over previously proposed methods on both the digital and the print-scanned morphed face image database.", "title": "" }, { "docid": "7bd5d9a477d563ffe5782241ddc4c5cd", "text": "Research on code reviews has often focused on defect counts instead of defect types, which offers an imperfect view of code review benefits. In this paper, we classified the defects of nine industrial (C/C++) and 23 student (Java) code reviews, detecting 388 and 371 defects, respectively. First, we discovered that 75 percent of defects found during the review do not affect the visible functionality of the software. Instead, these defects improved software evolvability by making it easier to understand and modify. Second, we created a defect classification consisting of functional and evolvability defects. The evolvability defect classification is based on the defect types found in this study, but, for the functional defects, we studied and compared existing functional defect classifications.
The classification can be useful for assigning code review roles, creating checklists, assessing software evolvability, and building software engineering tools. We conclude that, in addition to functional defects, code reviews find many evolvability defects and, thus, offer additional benefits over execution-based quality assurance methods that cannot detect evolvability defects. We suggest that code reviews may be most valuable for software products with long life cycles as the value of discovering evolvability defects in them is greater than for short life cycle systems.", "title": "" }, { "docid": "140815c8ccd62d0169fa294f6c4994b8", "text": "Six specific personality traits – playfulness, chase-proneness, curiosity/fearlessness, sociability, aggressiveness, and distance-playfulness – and a broad boldness dimension have been suggested for dogs in previous studies based on data collected in a standardized behavioural test (“dog mentality assessment”, DMA). In the present study I investigated the validity of the specific traits for predicting typical behaviour in everyday life. A questionnaire with items describing the dog's typical behaviour in a range of situations was sent to owners of dogs that had carried out the DMA behavioural test 1–2 years earlier. Of the questionnaires that were sent out 697 were returned, corresponding to a response rate of 73.3%. Based on factor analyses on the questionnaire data, behavioural factors in everyday life were suggested to correspond to the specific personality traits from the DMA. Correlation analyses suggested construct validity for the traits playfulness, curiosity/fearlessness, sociability, and distance-playfulness. Chase-proneness, which I expected to be related to predatory behaviour in everyday life, was instead related to human-directed play interest and nonsocial fear. Aggressiveness was the only trait from the DMA with low association to all of the behavioural factors from the questionnaire. The results suggest that three components of dog personality are measured in the DMA: (1) interest in playing with humans; (2) attitude towards strangers (interest in, fear of, and aggression towards); and (3) non-social fearfulness. These three components correspond to the traits playfulness, sociability, and curiosity/fearlessness, respectively, all of which were found to be related to a higher-order shyness–boldness dimension.
However, considering this limitation, the test seems to validly assess important aspects of dog personality, which supports the use of the test as an instrument in dog breeding and in selection of individual dogs for different purposes. # 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2e42ab12b43022d22b9459cfaea6f436", "text": "Treemaps provide an interesting solution for representing hierarchical data. However, most studies have mainly focused on layout algorithms and paid limited attention to the interaction with treemaps. This makes it difficult to explore large data sets and to get access to details, especially to those related to the leaves of the trees. We propose the notion of zoomable treemaps (ZTMs), an hybridization between treemaps and zoomable user interfaces that facilitates the navigation in large hierarchical data sets. By providing a consistent set of interaction techniques, ZTMs make it possible for users to browse through very large data sets (e.g., 700,000 nodes dispatched amongst 13 levels). These techniques use the structure of the displayed data to guide the interaction and provide a way to improve interactive navigation in treemaps.", "title": "" }, { "docid": "92d04ad5a9fa32c2ad91003213b1b86d", "text": "You're being asked to quantify usability improvements with statistics. But even with a background in statistics, you are hesitant to statistically analyze the data, as you may be unsure about which statistical tests to...", "title": "" } ]
scidocsrr
8bb96df5a72cd12464251d87df0936ab
Designing vehicle tracking system - an open source approach
[ { "docid": "f5519eff0c13e0ee42245fdf2627b8ae", "text": "An efficient vehicle tracking system is designed and implemented for tracking the movement of any equipped vehicle from any location at any time. The proposed system made good use of a popular technology that combines a Smartphone application with a microcontroller. This will be easy to make and inexpensive compared to others. The designed in-vehicle device works using Global Positioning System (GPS) and Global system for mobile communication / General Packet Radio Service (GSM/GPRS) technology that is one of the most common ways for vehicle tracking. The device is embedded inside a vehicle whose position is to be determined and tracked in real-time. A microcontroller is used to control the GPS and GSM/GPRS modules. The vehicle tracking system uses the GPS module to get geographic coordinates at regular time intervals. The GSM/GPRS module is used to transmit and update the vehicle location to a database. A Smartphone application is also developed for continuously monitoring the vehicle location. The Google Maps API is used to display the vehicle on the map in the Smartphone application. Thus, users will be able to continuously monitor a moving vehicle on demand using the Smartphone application and determine the estimated distance and time for the vehicle to arrive at a given destination. In order to show the feasibility and effectiveness of the system, this paper presents experimental results of the vehicle tracking system and some experiences on practical implementations.", "title": "" } ]
[ { "docid": "b811c82ff944715edc2b7dec382cb529", "text": "The mobile industry has experienced a dramatic growth; it evolves from analog to digital 2G (GSM), then to high date rate cellular wireless communication such as 3G (WCDMA), and further to packet optimized 3.5G (HSPA) and 4G (LTE and LTE advanced) systems. Today, the main design challenges of mobile phone antenna are the requirements of small size, built-in structure, and multisystems in multibands, including all cellular 2G, 3G, 4G, and other noncellular radio-frequency (RF) bands, and moreover the need for a nice appearance and meeting all standards and requirements such as specific absorption rates (SARs), hearing aid compatibility (HAC), and over the air (OTA). This paper gives an overview of some important antenna designs and progress in mobile phones in the last 15 years, and presents the recent development on new antenna technology for LTE and compact multiple-input-multiple-output (MIMO) terminals.", "title": "" }, { "docid": "dcbfaec8966e10b8b87311f17bf9a3c5", "text": "The study presented here investigated the effects of emotional valence on the memory for words by assessing both memory performance and pupillary responses during a recognition memory task. Participants had to make speeded judgments on whether a word presented in the test phase of the experiment had already been presented (\"old\") or not (\"new\"). An emotion-induced recognition bias was observed: Words with emotional content not only produced a higher amount of hits, but also elicited more false alarms than neutral words. Further, we found a distinct pupil old/new effect characterized as an elevated pupillary response to hits as opposed to correct rejections. Interestingly, this pupil old/new effect was clearly diminished for emotional words. We therefore argue that the pupil old/new effect is not only able to mirror memory retrieval processes, but also reflects modulation by an emotion-induced recognition bias.", "title": "" }, { "docid": "74235290789c24ce00d54541189a4617", "text": "This article deals with an interesting application of Fractional Order (FO) Proportional Integral Derivative (PID) Controller for speed regulation in a DC Motor Drive. The design of five interdependent Fractional Order controller parameters has been formulated as an optimization problem based on minimization of set point error and controller output. The task of optimization was carried out using Artificial Bee Colony (ABC) algorithm. A comparative study has also been made to highlight the advantage of using a Fractional order PID controller over conventional PID control scheme for speed regulation of application considered. Extensive simulation results are provided to validate the effectiveness of the proposed approach.", "title": "" }, { "docid": "0868f1ccd67db523026f1650b03311ba", "text": "Conflict with humans over livestock and crops seriously undermines the conservation prospects of India's large and potentially dangerous mammals such as the tiger (Panthera tigris) and elephant (Elephas maximus). This study, carried out in Bhadra Tiger Reserve in south India, estimates the extent of material and monetary loss incurred by resident villagers between 1996 and 1999 in conflicts with large felines and elephants, describes the spatiotemporal patterns of animal damage, and evaluates the success of compensation schemes that have formed the mainstay of loss-alleviation measures. 
Annually each household lost an estimated 12% (0.9 head) of their total holding to large felines, and approximately 11% of their annual grain production (0.82 tonnes per family) to elephants. Compensations awarded offset only 5% of the livestock loss and 14% of crop losses and were accompanied by protracted delays in the processing of claims. Although the compensation scheme has largely failed to achieve its objective of alleviating loss, its implementation requires urgent improvement if reprisal against large wild mammals is to be minimized. Furthermore, innovative schemes of livestock and crop insurance need to be tested as alternatives to compensations.", "title": "" }, { "docid": "e756574e701c9ecc4e28da6135499215", "text": "MicroRNAs are small noncoding RNA molecules that regulate gene expression posttranscriptionally through complementary base pairing with thousands of messenger RNAs. They regulate diverse physiological, developmental, and pathophysiological processes. Recent studies have uncovered the contribution of microRNAs to the pathogenesis of many human diseases, including liver diseases. Moreover, microRNAs have been identified as biomarkers that can often be detected in the systemic circulation. We review the role of microRNAs in liver physiology and pathophysiology, focusing on viral hepatitis, liver fibrosis, and cancer. We also discuss microRNAs as diagnostic and prognostic markers and microRNA-based therapeutic approaches for liver disease.", "title": "" }, { "docid": "1585d7e1f1e6950949dc954c2d0bba51", "text": "The state-of-the-art techniques for aspect-level sentiment analysis focus on feature modeling using a variety of deep neural networks (DNN). Unfortunately, their practical performance may fall short of expectations due to semantic complexity of natural languages. Motivated by the observation that linguistic hints (e.g. explicit sentiment words and shift words) can be strong indicators of sentiment, we present a joint framework, SenHint, which integrates the output of deep neural networks and the implication of linguistic hints into a coherent reasoning model based on Markov Logic Network (MLN). In SenHint, linguistic hints are used in two ways: (1) to identify easy instances, whose sentiment can be automatically determined by machine with high accuracy; (2) to capture implicit relations between aspect polarities. We also empirically evaluate the performance of SenHint on both English and Chinese benchmark datasets. Our experimental results show that SenHint can effectively improve accuracy compared with the state-of-the-art alternatives.", "title": "" }, { "docid": "b28946a3a60875e94d4c4b33482604fd", "text": "People use social networks for different communication purposes, for example to share their opinion on ongoing events. One way to exploit this common knowledge is by using Sentiment Analysis and Natural Language Processing in order to extract useful information. In this paper we present a SA approach applied to a set of tweets related to a recent natural disaster in Italy; our goal is to identify tweets that may provide useful information from a disaster management perspective.", "title": "" }, { "docid": "92ac3bfdcf5e554152c4ce2e26b77315", "text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? 
We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.", "title": "" }, { "docid": "ad00ba810df4c7295b89640c64b50e51", "text": "Prospective memory (PM) research typically examines the ability to remember to execute delayed intentions but often ignores the ability to forget finished intentions. We had participants perform (or not perform; control group) a PM task and then instructed them that the PM task was finished. We later (re)presented the PM cue. Approximately 25% of participants made a commission error, the erroneous repetition of a PM response following intention completion. Comparisons between the PM groups and control group suggested that commission errors occurred in the absence of preparatory monitoring. Response time analyses additionally suggested that some participants experienced fatigue across the ongoing task block, and those who did were more susceptible to making a commission error. These results supported the hypothesis that commission errors can arise from the spontaneous retrieval of finished intentions and possibly the failure to exert executive control to oppose the PM response.", "title": "" }, { "docid": "61055bd3152c1bff75ee5e69b603b49b", "text": "This paper focuses on investigating immunological principles in designing a multi-agent system for intrusion/anomaly detection and response in networked computers. In this approach, the immunity-based agents roam around the machines (nodes or routers), and monitor the situation in the network (i.e. look for changes such as malfunctions, faults, abnormalities, misuse, deviations, intrusions, etc.). These agents can mutually recognize each other's activities and can take appropriate actions according to the underlying security policies. Specifically, their activities are coordinated in a hierarchical fashion while sensing, communicating and generating responses. Such an agent can learn and adapt to its environment dynamically and can detect both known and unknown intrusions. This research is the part of an effort to develop a multi-agent detection system that can simultaneously monitor networked computer's activities at different levels (such as user level, system level, process level and packet level) in order to determine intrusions and anomalies. The proposed intrusion detection system is designed to be flexible, extendible, and adaptable that can perform real-time monitoring in accordance with the needs and preferences of network administrators. This paper provides the conceptual view and a general framework of the proposed system. 1. Inspiration from the nature: Every organism in nature is constantly threatened by other organisms, and each species has evolved elaborate set of protective measures called, collectively, the immune system. 
The natural immune system is an adaptive learning system that is highly distributive in nature. It employs multi-level defense mechanisms to make rapid, highly specific and often very protective responses against wide variety of pathogenic microorganisms. The immune system is a subject of great research interest because of its powerful information processing capabilities [5,6]. Specifically, its' mechanisms to extract unique signatures from antigens and ability to recognize and classify dangerous antigenic peptides are very important. It also uses memory to remember signature patterns that have been seen previously, and use combinatorics to construct antibody for efficient detection. It is observed that the overall behavior of the system is an emergent property of several local interactions. Moreover, the immune response can be either local or systemic, depending on the route and property of the antigenic challenge [19]. The immune system is consists of different populations of immune cells (mainly B or T cells) which circulate at various primary and secondary lymphoid organs of the body. They are carefully controlled to ensure that appropriate populations of B and T cells (naive, effector, and memory) are recruited into different location [19]. This differential migration of lymphocyte subpopulations at different locations (organs) of the body is called trafficking or homing. The lymph nodes and organs provide specialized local environment (called germinal center) during pathogenic attack in any part of the body. This dynamic mechanism support to create a large number of antigen-specific lymphocytes (as effector and memory cells) for stronger defense through the process of the clonal expansion and differentiation. Interestingly, memory cells exhibit selective homing to the type of tissue in which they first encountered an antigen. Presumably this ensures that a particular memory cell will return to the location where it is most likely to re-encounter a subsequent antigenic challenge. The mechanisms of immune responses are self-regulatory in nature. There is no central organ that controls the functions of the immune system. The regulation of the clonal expansion and proliferation of B cells are closely regulated (with a co-stimulation) in order to prevent uncontrolled immune response. This second signal helps to ensure tolerance and judge between dangerous and harmless invaders. So the purpose of this accompanying signal in identifying a non-self is to minimize false alarm and to generate decisive response in case of a real danger[19]. 2. Existing works in Intrusion Detection: The study of security in computer networks is a rapidly growing area of interest because of the proliferation of networks (LANs, WANs etc.), greater deployment of shared computer databases (packages) and the increasing reliance of companies, institutions and individuals on such data. Though there are many levels of access protection to computing and network resources, yet the intruders are finding ways to entry into many sites and systems, and causing major damages. So the task of providing and maintaining proper security in a network system becomes a challenging issue. Intrusion/Anomaly detection is an important part of computer security. It provides an additional layer of defense against computer misuse (abuse) after physical, authentication and access control. 
There exist different methods for intrusion detection [7,23,25,29] and the early models include IDES (later versions NIDES and MIDAS), W & S, AudES, NADIR, DIDS, etc. These approaches monitor audit trails generated by systems and user applications and perform various statistical analyses in order to derive regularities in behavior pattern. These works based on the hypothesis that an intruder's behavior will be noticeably different from that of a legitimate user, and security violations can be detected by monitoring these audit trails. Most of these methods, however, used to monitor a single host [13,14], though NADIR and DIDS can collect and aggregate audit data from a number of hosts to detect intrusions. However, in all cases, there is no real analysis of patterns of network activities and they only perform centralized analysis. Recent works include GrIDS[27] which used hierarchical graphs to detect attacks on networked systems. Other approaches used autonomous agent architectures [1,2,26] for distributed intrusion detection. 3. Computer Immune Systems: The security in the field of computing may be considered as analogous to the immunity in natural systems. In computing, threats and dangers (of compromising privacy, integrity, and availability) may arise because of malfunction of components or intrusive activities (both internal and external). The idea of using immunological principles in computer security [9-11,15,16,18] started since 1994. Stephanie Forrest and her group at the University of New Mexico have been working on a research project with a long-term goal to build an artificial immune system for computers [911,15,16]. This immunity-based system has much more sophisticated notions of identity and protection than those afforded by current operating systems, and it is suppose to provide a general-purpose protection system to augment current computer security systems. The security of computer systems depends on such activities as detecting unauthorized use of computer facilities, maintaining the integrity of data files, and preventing the spread of computer viruses. The problem of protecting computer systems from harmful viruses is viewed as an instance of the more general problem of distinguishing self (legitimate users, uncorrupted data, etc.) from dangerous other (unauthorized users, viruses, and other malicious agents). This method (called the negative-selection algorithm) is intended to be complementary to the more traditional cryptographic and deterministic approaches to computer security. As an initial step, the negativeselection algorithm has been used as a file-authentication method on the problem of computer virus detection [9].", "title": "" }, { "docid": "7dbfea5517a34799634dc002c5a3e3c7", "text": "BACKGROUND\nSelf-esteem is the value that the individuals give themselves, and sexual self-concept is also a part of individuality or sexual-self. Impairment or disability exists not only in the physical body of disabled people but also in their attitudes. Negative attitudes affect the mental health of disabled people, causing them to have lower self-esteem.\n\n\nOBJECTIVES\nThis study aimed to examine the relationship between self-esteem and sexual self-concept in people with physical-motor disabilities.\n\n\nPATIENTS AND METHODS\nThis cross-sectional study was conducted on 200 random samples with physical-motor disabilities covered by Isfahan Welfare Organization in 2013. 
Data collection instruments were the Persian Eysenck self-esteem questionnaire, and five domains (sexual anxiety, sexual self-efficacy, sexual self-esteem, sexual fear and sexual depression) of the Persian multidimensional sexual self-concept questionnaire. Because of incomplete filling of the questionnaires, the data of 183 people were analyzed by the SPSS 16.0 software. Data were analyzed using the t-test, Man-Whitney and Kruskal-Wallis tests and Spearman correlation coefficient.\n\n\nRESULTS\nThe mean age was 36.88 ± 8.94 years for women and 37.80 ± 10.13 for men. The mean scores of self-esteem among women and men were 15.80 ± 3.08 and 16.2 ± 2.90, respectively and there was no statistically significance difference. Comparison of the mean scores of sexual anxiety, sexual self-efficacy, sexual self-esteem, sexual fear and sexual depression among men and women showed that women scored higher than men in all domains. This difference was statistically significant in other domains except the sexual self-esteem (14.92 ± 3.61 vs. 13.56 ± 4.52) (P < 0.05). The Kruskal-Wallis test showed that except for sexual anxiety and sexual self-esteem, there was a statistical difference between other domains of people's sexual self-concept and degree of disability (P < 0.05). Moreover, Spearman coefficient showed that there was only a correlation between men's sexual anxiety, sexual self-esteem and sexual self-efficacy with their self-esteem. This correlation was positive in sexual anxiety and negative in two other domains.\n\n\nCONCLUSIONS\nLack of difference in self-esteem of disabled people in different degrees of disability and in both men and women suggests that disabled people should not be presumed to have low self-esteem, and their different aspects of life should be attended to, just like others. Furthermore, studies should be designed and implemented based on psychological, social and environmental factors that can help disabled people to promote their positive sexual self-concept through marriage, and reduce their negative self-concept.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8d9be82bfc32a4631f1b1f24e1d962a9", "text": "Determine an optimal set of design parameter of PR whose DW fits a prescribed workspace as closely as possible is an important and foremost design task before manufacturing. In this paper, an optimal design method of a linear Delta robot (LDR) to obtain the prescribed cuboid dexterous workspace (PCDW) is proposed. The optical algorithms are based on the concept of performance chart. The performance chart shows the relationship between a criterion and design parameters graphically and globally. The kinematic problem is analyzed in brief to determine the design parameters and their relation. Two algorithms are designed to determine the maximal inscribed rectangle of dexterous workspace in the O-xy plane and plot the performance chart. As an applying example, a design result of the LDR with a prescribed cuboid dexterous workspace is presented. 
The optical results shown that every corresponding maximal inscribed rectangle can be obtained for every given RATE by the algorithm and the error of RATE is less than 0.05.The method and the results of this paper are very useful for the design and comparison of the parallel robot. Key-Words: Parallel Robot, Cuboid Dexterous Workspace, Optimal Design, performance chart ∗ This work is supported by Zhejiang Province Education Funded Grant #20051392.", "title": "" }, { "docid": "a9ae947533a57ab20c85c128acef9a4a", "text": "Recently, modal pushover analysis (MPA) has been developed to improve conventional pushover procedures by including higher-mode contributions to seismic demands. This study compares the seismic demands for vertically irregular frames determined by MPA procedure and the rigorous nonlinear response history analysis (RHA), due to an ensemble of 20 ground motions. Forty-eight irregular frames, all 12-story high with strong-columns and weak-beams, were designed with three types of irregularity— stiffness, strength, and combined stiffness and strength—introduced in eight different locations along the height using two modification factors. Next, the median and dispersion values of the ratio of story drift demands determined by modal pushover analysis (MPA) and nonlinear RHA were computed to measure the bias and dispersion of MPA estimates leading to the following results: (1) the bias in the MPA procedure does not increase, i.e., its accuracy does not deteriorate, in spite of irregularity in stiffness, strength, or stiffness and strength provided the irregularity is in the middle or upper story; (2) the MPA procedure is less accurate relative to the reference “regular” frame in estimating the seismic demands of frames with strong or stiff-and-strong first story; soft, weak, or soft-and-weak lower half; stiff, strong, or stiff-and-strong lower half; (3) in spite of the larger bias in estimating drift demands for some of the stories in particular cases, the MPA procedure identifies stories with largest drift demands and estimates them to a sufficient degree of accuracy, detecting critical stories in such frames; and (4) the bias in the MPA procedure for frames with soft, weak or soft-and-weak first story is about the same as for the “regular” frame.", "title": "" }, { "docid": "86b330069b20d410eb2186479fe7f500", "text": "Pattern classification systems are commonly used in adversarial applications, like biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. As this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities,whose exploitation may severely affect their performance, and consequently limit their practical utility. Extending pattern classification theory and design methods to adversarial settings is thus a novel and very relevant research direction, which has not yet been pursued in a systematic way. In this paper, we address one of the main open issues: evaluating at design phase the security of pattern classifiers, namely, the performance degradation under potential attacks they may incur during operation. We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications. 
Reported results show that security evaluation can provide a more complete understanding of the classifier’s behavior in adversarial environments, and lead to better design choices.", "title": "" }, { "docid": "34538b62e4a3320b4d251768f8703084", "text": "The problem of learning multiple consecutive tasks, known as lifelong learning, is of great importance to the creation of intelligent, general-purpose, and flexible machines. In this paper, we develop a method for online multi-task learning in the lifelong learning setting. The proposed Efficient Lifelong Learning Algorithm (ELLA) maintains a sparsely shared basis for all task models, transfers knowledge from the basis to learn each new task, and refines the basis over time to maximize performance across all tasks. We show that ELLA has strong connections to both online dictionary learning for sparse coding and state-of-the-art batch multi-task learning methods, and provide robust theoretical performance guarantees. We show empirically that ELLA yields nearly identical performance to batch multi-task learning while learning tasks sequentially in three orders of magnitude (over 1,000x) less time.", "title": "" }, { "docid": "bfb11c5c521934d1b4b282493ee033b0", "text": "This paper outlines the results of a driving simulator study conducted for the European CityMobil project, which was designed to investigate the effect of a highly automated driving scenario on driver behaviour. Drivers’ response to a number of ‘critical’ scenarios was compared in manual driving with that in automated driving. Drivers were in full control of the vehicle and its manoeuvres in the manual driving condition, whilst control of the vehicle was transferred to an ‘automated system’ in the automated driving condition. Automated driving involved the engagement of lateral and longitudinal controllers, which kept the vehicle in the centre of the lane and at a speed of 40 mph, respectively. Drivers were required to regain control of the driving task if the automated system was unable to handle a critical situation. An auditory alarm forewarned drivers of an imminent collision in such critical situations. Drivers’ response to all critical events was found to be much later in the automated driving condition, compared to manual driving. This is thought to be because drivers’ situation awareness was reduced during automated driving, with response only produced after drivers heard the alarm. Alternatively, drivers may have relied too heavily on the system, waiting for the auditory alarm before responding in a critical situation. These results suggest that action must be taken when implementing fully automated driving to ensure that the driver is kept in the loop at all times and is able to respond in time and appropriately during critical situations.", "title": "" }, { "docid": "9b9425132e89d271ed6baa0dbc16b941", "text": "Although personalized recommendation has been investigated for decades, the wide adoption of Latent Factor Models (LFM) has made the explainability of recommendations a critical issue to both the research community and practical application of recommender systems.
For example, in many practical systems the algorithm just provides a personalized item recommendation list to the users, without persuasive personalized explanation about why such an item is recommended while another is not. Unexplainable recommendations introduce negative effects to the trustworthiness of recommender systems, and thus affect the effectiveness of recommendation engines. In this work, we investigate explainable recommendation in aspects of data explainability, model explainability, and result explainability, and the main contributions are as follows: 1. Data Explainability: We propose Localized Matrix Factorization (LMF) framework based Bordered Block Diagonal Form (BBDF) matrices, and further applied this technique for parallelized matrix factorization. 2. Model Explainability: We propose Explicit Factor Models (EFM) based on phrase-level sentiment analysis, as well as dynamic user preference modeling based on time series analysis. In this work, we extract product features and user opinions towards different features from large-scale user textual reviews based on phrase-level sentiment analysis techniques, and introduce the EFM approach for explainable model learning and recommendation. 3. Economic Explainability: We propose the Total Surplus Maximization (TSM) framework for personalized recommendation, as well as the model specification in different types of online applications. Based on basic economic concepts, we provide the definitions of utility, cost, and surplus in the application scenario of Web services, and propose the general framework of web total surplus calculation and maximization.", "title": "" }, { "docid": "a0d2f54ea60acaade93a7bb8b5c6d84d", "text": "The number of species of macro organisms on the planet is estimated at about 10 million. This staggering diversity and the need to better understand it led inevitably to the development of classification schemes called biological taxonomies. Unfortunately, in addition to this enormous diversity, the traditional identification and classification workflows are both slow and error-prone; classification expertise is in the hands of a small number of expert taxonomists; and to make things worse, the number of taxonomists has steadily declined in recent years. Automated identification of organisms has therefore become not just a long time desire but a need to better understand, use, and save biodiversity. This paper presents a survey of recent efforts to use computer vision and machine learning techniques to identify organisms. It focuses on the use of leaf images to identify plant species. In addition, it presents the main technical and scientific challenges as well as the opportunities for herbaria and cybertaxonomists to take a quantum leap towards identifying biodiversity efficiently and empowering the general public by putting in their hands automated identification tools.", "title": "" } ]
scidocsrr
2b0ea28b2909794bb3d627c718b42bf0
Title Energy-Efficient Deep In-memory Architecture for NAND Flash Memories
[ { "docid": "73284fdf9bc025672d3b97ca5651084a", "text": "With continued scaling of NAND flash memory process technology and multiple bits programmed per cell, NAND flash reliability and endurance are degrading. Understanding, characterizing, and modeling the distribution of the threshold voltages across different cells in a modern multi-level cell (MLC) flash memory can enable the design of more effective and efficient error correction mechanisms to combat this degradation. We show the first published experimental measurement-based characterization of the threshold voltage distribution of flash memory. To accomplish this, we develop a testing infrastructure that uses the read retry feature present in some 2Y-nm (i.e., 20-24nm) flash chips. We devise a model of the threshold voltage distributions taking into account program/erase (P/E) cycle effects, analyze the noise in the distributions, and evaluate the accuracy of our model. A key result is that the threshold voltage distribution can be modeled, with more than 95% accuracy, as a Gaussian distribution with additive white noise, which shifts to the right and widens as P/E cycles increase. The novel characterization and models provided in this paper can enable the design of more effective error tolerance mechanisms for future flash memories.", "title": "" }, { "docid": "418a5ef9f06f8ba38e63536671d605c1", "text": "Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.", "title": "" }, { "docid": "cf18799eeaf3c5f2b344c1bbbc15da7f", "text": "This paper presents a machine-learning classifier where the computation is performed within a standard 6T SRAM array. This eliminates explicit memory operations, which otherwise pose energy/performance bottlenecks, especially for emerging algorithms (e.g., from machine learning) that result in high ratio of memory accesses. We present an algorithm and prototype IC (in 130nm CMOS), where a 128×128 SRAM array performs storage of classifier models and complete classifier computations. We demonstrate a real application, namely digit recognition from MNIST-database images. The accuracy is equal to a conventional (ideal) digital/SRAM system, yet with 113× lower energy. The approach achieves accuracy >95% with a full feature set (i.e., 28×28=784 image pixels), and 90% when reduced to 82 features (as demonstrated on the IC due to area limitations). 
The energy per 10-way digit classification is 633pJ at a speed of 50MHz.", "title": "" } ]
[ { "docid": "3ed9c74b9dd7d80f921d1d31bc001ae6", "text": "INTRODUCTION\nMuscle thickness (MT) and muscle echo-intensity (EI) allow the study of skeletal muscle adaptive changes with ultrasound. This study investigates the intra- and inter-session reliability and agreement of MT and EI measurements for each of the four heads of the quadriceps femoris in transverse and longitudinal scans, using two sizes for the region of interest (ROI); EI measurements only.\n\n\nMETHODS\nThree B-mode images from two views were acquired from each head of quadriceps femoris from twenty participants (10 females) in two sessions, 7 days apart. EI was measured using a large and a small ROI. Reliability was examined with the mixed two-way intra-class correlation coefficient (ICC), the standard error of mean (SEM) and the smallest detectable change (SDC). Bland-Altman's plots were used to study agreement.\n\n\nRESULTS\nHigh to very high inter-session ICC values were found for MT for all muscle heads, particularly for measurements from transverse scans. For EI measurement, ICC values ranged from low to high, with higher ICC values seen with the largest ROI. SDC values ranged between 0.19 and 0.53 cm for MT and between 3.73 and 18.56 arbitrary units (a.u.) for two ROIs. Good agreement existed between MT measurements made in both scans. A small bias and larger 95% limits of agreement were seen for EI measurements collected with the two ROI sizes.\n\n\nCONCLUSION\nUltrasound measures of MT and EI show moderate to very high reliability. The reliability and agreement of MT and EI measurements are improved in transverse scans and with larger ROIs.", "title": "" }, { "docid": "9490f117f153a16152237a5a6b08c0a3", "text": "Evidence from macaque monkey tracing studies suggests connectivity-based subdivisions within the precuneus, offering predictions for similar subdivisions in the human. Here we present functional connectivity analyses of this region using resting-state functional MRI data collected from both humans and macaque monkeys. Three distinct patterns of functional connectivity were demonstrated within the precuneus of both species, with each subdivision suggesting a discrete functional role: (i) the anterior precuneus, functionally connected with the superior parietal cortex, paracentral lobule, and motor cortex, suggesting a sensorimotor region; (ii) the central precuneus, functionally connected to the dorsolateral prefrontal, dorsomedial prefrontal, and multimodal lateral inferior parietal cortex, suggesting a cognitive/associative region; and (iii) the posterior precuneus, displaying functional connectivity with adjacent visual cortical regions. These functional connectivity patterns were differentiated from the more ventral networks associated with the posterior cingulate, which connected with limbic structures such as the medial temporal cortex, dorsal and ventromedial prefrontal regions, posterior lateral inferior parietal regions, and the lateral temporal cortex. Our findings are consistent with predictions from anatomical tracer studies in the monkey, and provide support that resting-state functional connectivity (RSFC) may in part reflect underlying anatomy. These subdivisions within the precuneus suggest that neuroimaging studies will benefit from treating this region as anatomically (and thus functionally) heterogeneous. 
Furthermore, the consistency between functional connectivity networks in monkeys and humans provides support for RSFC as a viable tool for addressing cross-species comparisons of functional neuroanatomy.", "title": "" }, { "docid": "e1dcb6cdb2a8e33682557a1a0a57fdb6", "text": "Recently, a RGB image encryption algorithm based on DNA encoding and chaos map has been proposed. It was reported that the encryption algorithm can be broken with four pairs of chosen plain-images and the corresponding cipher-images. This paper re-evaluates the security of the encryption algorithm, and finds that the encryption algorithm can be broken efficiently with only one known plain-image. The effectiveness of the proposed known-plaintext attack is supported by both rigorous theoretical analysis and experimental results. In addition, two other security defects are also reported.", "title": "" }, { "docid": "e89b6a72083a8a88ad29d43e9e2ecc72", "text": "High-throughput screening (HTS) system has the capability to produce thousands of images containing the millions of cells. An expert could categorize each cell’s phenotype using visual inspection under a microscope. In fact, this manual approach is inefficient because image acquisition systems can produce massive amounts of cell image data per hour. Therefore, we propose an automated and efficient machine-learning model for phenotype detection from HTS system. Our goal is to find the most distinctive features (using feature selection and reduction), which will provide the best phenotype classification both in terms of accuracy and validation time from the feature pool. First, we used minimum redundancy and maximum relevance (MRMR) to select the most discriminant features and evaluate their corresponding impact on the model performance with a support vector machine (SVM) classifier. Second, we used principal component analysis (PCA) to reduce our feature to the most relevant feature list. The main difference is that MRMR does not transform the original features, unlike PCA. Later, we calculated an overall classification accuracy of original features (i.e., 1025 features) and compared with feature selection and reduction accuracies (∼30 features). The feature selection method gives the highest accuracy than reduction and original features. We validated and evaluated our model against well-known benchmark problem (i.e. Hela dataset) with a classification accuracy of 92.70% and validation time in 0.41 seconds.", "title": "" }, { "docid": "f6b9e4de676fb61df71c2fecc5ea5a0d", "text": "Estimating the pose of objects from range data is a problem of considerable practical importance for many vision applications. This paper presents an approach for accurate and efficient 3D pose estimation from 2.5D range images. Initialized with an approximate pose estimate, the proposed approach refines it so that it accurately accounts for an acquired range image. This is achieved by using a hypothesize-and-test scheme that combines Particle Swarm Optimization (PSO) and graphicsbased rendering to minimize a cost function of object pose that quantifies the misalignment between the acquired and a hypothesized, rendered range image. Extensive experimental results demonstrate the superior performance of the approach compared to the Iterative Closest Point (ICP) algorithm that is commonly used for pose refinement.", "title": "" }, { "docid": "3c876ddb6922c8ac14d619000f121136", "text": "MANETs are an upcoming technology that is gaining momentum in recent years. 
Due to their unique characteristics, MANETs are suffering from wide range of security attacks. Wormhole is a common security issue encounter in MANETs routing protocol. A new routing protocol naming extended prime product number (EPPN) based on the hop count model is proposed in this article. Here hop count between source & destination is obtained depending upon the current active route. This hop count model is integrated into AODV protocol. In the proposed scheme firstly the route is selected on the basis of RREP and then hop count model calculates the hop count between source & destination. Finally wormhole DETECTION procedure will be started if the calculated hop count is greater than the received hop count in the route to get out the suspected nodes.", "title": "" }, { "docid": "797301307659377049b04ff1c02ca6ec", "text": "Spectrograms of speech and audio signals are time-frequency densities, and by construction, they are non-negative and do not have phase associated with them. Under certain conditions on the amount of overlap between consecutive frames and frequency sampling, it is possible to reconstruct the signal from the spectrogram. Deviating from this requirement, we develop a new technique to incorporate the phase of the signal in the spectrogram by satisfying what we call as the delta dominance condition, which in general is different from the well known minimum-phase condition. In fact, there are signals that are delta dominant but not minimum-phase and vice versa. The delta dominance condition can be satisfied in multiple ways, for example by placing a Kronecker impulse of the right amplitude or by choosing a suitable window function. A direct consequence of this novel way of constructing the spectrograms is that the phase of the signal is directly encoded or embedded in the spectrogram. We also develop a reconstruction methodology that takes such phase-encoded spectrograms and obtains the signal using the discrete Fourier transform (DFT). It is envisaged that the new class of phase-encoded spectrogram representations would find applications in various speech processing tasks such as analysis, synthesis, enhancement, and recognition.", "title": "" }, { "docid": "5e3d7397a1ba9d8097ebac82ca9ae65c", "text": "Despite advances in the evaluation, treatment, and pathophysiological understanding of necrotizing soft-tissue infections, Fournier's gangrene remains a life-threatening urological emergency. Although the condition can affect patients of any age and gender, it might be more prevalent in some high-risk groups with certain comorbidities. Several prognostic and diagnostic tools have been developed to assist with clinical decision-making once the diagnosis is made — primarily based on the physician's physical exam and potentially supported by laboratory and imaging findings. Expedited treatment with resuscitation, antibiotic administration, and rapid, wide surgical debridement are key elements of the initial management. These procedures must be followed by meticulous wound care and liberal use of planned subsequent surgical debridements. 
Once the patient has overcome the associated systemic illness, several reconstructive options for the genitalia and perineum can be considered to improve functionality and cosmesis.", "title": "" }, { "docid": "3817c02b7cc8846553854f270d236047", "text": "The annualized interest rate for a payday loan often exceeds 10 times that of a typical credit card, yet this market grew immensely in the 1990s and 2000s, elevating concerns about the risk payday loans pose to consumers and whether payday lenders target minority neighborhoods. This paper employs individual credit record data, and Census data on payday lender store locations, to assess these concerns. Taking advantage of several state law changes since 2006 and, following previous work, within-state-year differences in access arising from proximity to states that allow payday loans, I find little to no effect of payday loans on credit scores, new delinquencies, or the likelihood of overdrawing credit lines. The analysis also indicates that neighborhood racial composition has little influence on payday lender store locations conditional on income, wealth and demographic characteristics. JEL Codes: D14, G2", "title": "" }, { "docid": "9f01314a03290cf3d481f731648eb138", "text": "Recent advances in hardware and software for mobile computing have enabled a new breed of mobile AR systems and applications. A new breed of computing called “augmented ubiquitous computing” has resulted from the convergence of wearable computing, wireless networking and mobile AR interfaces. In this paper we provide a survey of different mobile and wireless technologies and how they have impact AR. Our goal is to place them into different categories so that it becomes easier to understand the state of art and to help identify new directions of research.", "title": "" }, { "docid": "a541260619ab3026451fab57d11ee276", "text": "A dead mammal (i.e. cadaver) is a high quality resource (narrow carbon:nitrogen ratio, high water content) that releases an intense, localised pulse of carbon and nutrients into the soil upon decomposition. Despite the fact that as much as 5,000 kg of cadaver can be introduced to a square kilometre of terrestrial ecosystem each year, cadaver decomposition remains a neglected microsere. Here we review the processes associated with the introduction of cadaver-derived carbon and nutrients into soil from forensic and ecological settings to show that cadaver decomposition can have a greater, albeit localised, effect on belowground ecology than plant and faecal resources. Cadaveric materials are rapidly introduced to belowground floral and faunal communities, which results in the formation of a highly concentrated island of fertility, or cadaver decomposition island (CDI). CDIs are associated with increased soil microbial biomass, microbial activity (C mineralisation) and nematode abundance. Each CDI is an ephemeral natural disturbance that, in addition to releasing energy and nutrients to the wider ecosystem, acts as a hub by receiving these materials in the form of dead insects, exuvia and puparia, faecal matter (from scavengers, grazers and predators) and feathers (from avian scavengers and predators). As such, CDIs contribute to landscape heterogeneity. 
Furthermore, CDIs are a specialised habitat for a number of flies, beetles and pioneer vegetation, which enhances biodiversity in terrestrial ecosystems.", "title": "" }, { "docid": "40c4175be1573d9542f6f9f859fafb01", "text": "BACKGROUND\nFalls are a major threat to the health and independence of seniors. Regular physical activity (PA) can prevent 40% of all fall injuries. The challenge is to motivate and support seniors to be physically active. Persuasive systems can constitute valuable support for persons aiming at establishing and maintaining healthy habits. However, these systems need to support effective behavior change techniques (BCTs) for increasing older adults' PA and meet the senior users' requirements and preferences. Therefore, involving users as codesigners of new systems can be fruitful. Prestudies of the user's experience with similar solutions can facilitate future user-centered design of novel persuasive systems.\n\n\nOBJECTIVE\nThe aim of this study was to investigate how seniors experience using activity monitors (AMs) as support for PA in daily life. The addressed research questions are as follows: (1) What are the overall experiences of senior persons, of different age and balance function, in using wearable AMs in daily life?; (2) Which aspects did the users perceive relevant to make the measurements as meaningful and useful in the long-term perspective?; and (3) What needs and requirements did the users perceive as more relevant for the activity monitors to be useful in a long-term perspective?\n\n\nMETHODS\nThis qualitative interview study included 8 community-dwelling older adults (median age: 83 years). The participants' experiences in using two commercial AMs together with tablet-based apps for 9 days were investigated. Activity diaries during the usage and interviews after the usage were exploited to gather user experience. Comments in diaries were summarized, and interviews were analyzed by inductive content analysis.\n\n\nRESULTS\nThe users (n=8) perceived that, by using the AMs, their awareness of own PA had increased. However, the AMs' impact on the users' motivation for PA and activity behavior varied between participants. The diaries showed that self-estimated physical effort varied between participants and varied for each individual over time. Additionally, participants reported different types of accomplished activities; talking walks was most frequently reported. To be meaningful, measurements need to provide the user with a reliable receipt of whether his or her current activity behavior is sufficient for reaching an activity goal. Moreover, praise when reaching a goal was described as motivating feedback. To be useful, the devices must be easy to handle. In this study, the users perceived wearables as easy to handle, whereas tablets were perceived difficult to maneuver. Users reported in the diaries that the devices had been functional 78% (58/74) of the total test days.\n\n\nCONCLUSIONS\nActivity monitors can be valuable for supporting seniors' PA. However, the potential of the solutions for a broader group of seniors can significantly be increased. Areas of improvement include reliability, usability, and content supporting effective BCTs with respect to increasing older adults' PA.", "title": "" }, { "docid": "38ec75b8195ace3cec2b771e87ef3885", "text": "With the proliferation of social networks and blogs, the Internet is increasingly being used to disseminate personal health information rather than just as a source of information. 
In this paper we exploit the wealth of user-generated data, available through the micro-blogging service Twitter, to estimate and track the incidence of health conditions in society. The method is based on two stages: we start by extracting possibly relevant tweets using a set of specially crafted regular expressions, and then classify these initial messages using machine learning methods. Furthermore, we selected relevant features to improve the results and the execution times. To test the method, we considered four health states or conditions, namely flu, depression, pregnancy and eating disorders, and two locations, Portugal and Spain. We present the results obtained and demonstrate that the detection results and the performance of the method are improved after feature selection. The results are promising, with areas under the receiver operating characteristic curve between 0.7 and 0.9, and f-measure values around 0.8 and 0.9. This fact indicates that such approach provides a feasible solution for measuring and tracking the evolution of health states within the society.", "title": "" }, { "docid": "7e788eb9ff8fd10582aa94a89edb10a2", "text": "This paper recasts the problem of feature location in source code as a decision-making problem in the presence of uncertainty. The solution to the problem is formulated as a combination of the opinions of different experts. The experts in this work are two existing techniques for feature location: a scenario-based probabilistic ranking of events and an information-retrieval-based technique that uses latent semantic indexing. The combination of these two experts is empirically evaluated through several case studies, which use the source code of the Mozilla Web browser and the Eclipse integrated development environment. The results show that the combination of experts significantly improves the effectiveness of feature location as compared to each of the experts used independently", "title": "" }, { "docid": "81243e721527e74f0997d6aeb250cc23", "text": "This paper compares the attributes of 36 slot, 33 slot and 12 slot brushless interior permanent magnet motor designs, each with an identical 10 pole interior magnet rotor. The aim of the paper is to quantify the trade-offs between alternative distributed and concentrated winding configurations taking into account aspects such as thermal performance, field weakening behaviour, acoustic noise, and efficiency. It is found that the concentrated 12 slot design gives the highest theoretical performance however significant rotor losses are found during testing and a large amount of acoustic noise and vibration is generated. The 33 slot design is found to have marginally better performance than the 36 slot but it also generates some unbalanced magnetic pull on the rotor which may lead to mechanical issues at higher speeds.", "title": "" }, { "docid": "60511dbd1dbb4c01881dac736dd7f988", "text": "The current study reconceptualized self-construal as a social cognitive indicator of self-observation that individuals employ for developing and maintaining social relationship with others. From the social cognitive perspective, this study investigated how consumers’ self-construal can affect consumers’ electronic word of mouth (eWOM) behavior through two cognitive factors (online community engagement self-efficacy and social outcome expectations) in the context of a social networking site. This study conducted an online experiment that directed 160 participants to visit a newly created online community. 
The results demonstrated that consumers’ relational view became salient when the consumers’ self-construal was primed to be interdependent rather than independent. Further, the results showed that such interdependent self-construal positively influenced consumers’ eWOM behavioral intentions through their community engagement self-efficacy and their social outcome expectations. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "62132ea78d0b5aa844ff25647159eedb", "text": "Gate turn offs (GTOs) have an inherent minimum ON-OFF time, which is needed for their safe operation. For GTO-based three-level or neutral-point-clamped (NPC) inverters, this minimum ON-OFF pulsewidth limitation results in a distortion of the output voltage and current waveforms, especially in the low index modulation region. Some approaches have been previously proposed to compensate for the minimum ON pulse. However, these methods increase the inverter switching losses. Two new methods of pulsewidth-modulation (PWM) control based on: 1) adding a bias to the reference voltage of the inverter and 2) switching patterns are presented. The former method improves the output waveforms, but increases the switching losses; while the latter improves the output waveforms without increasing the switching losses. The fluctuations of the neutral-point voltage are also reduced using this method. The theoretical and practical aspects as well as the experimental results are presented in this paper.", "title": "" }, { "docid": "b20aa2222759644b4b60b5b450424c9e", "text": "Manufacturing has faced significant changes during the last years, namely the move from a local economy towards a global and competitive economy, with markets demanding for highly customized products of high quality at lower costs, and with short life cycles. In this environment, manufacturing enterprises, to remain competitive, must respond closely to customer demands by improving their flexibility and agility, while maintaining their productivity and quality. Dynamic response to emergence is becoming a key issue in manufacturing field because traditional manufacturing control systems are built upon rigid control architectures, which cannot respond efficiently and effectively to dynamic change. In these circumstances, the current challenge is to develop manufacturing control systems that exhibit intelligence, robustness and adaptation to the environment changes and disturbances. The introduction of multi-agent systems and holonic manufacturing systems paradigms addresses these requirements, bringing the advantages of modularity, decentralization, autonomy, scalability and reusability. This paper surveys the literature in manufacturing control systems using distributed artificial intelligence techniques, namely multi-agent systems and holonic manufacturing systems principles. The paper also discusses the reasons for the weak adoption of these approaches by industry and points out the challenges and research opportunities for the future. & 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0bbfd07d0686fc563f156d75d3672c7b", "text": "In this paper, we provide a comprehensive survey of the mixture of experts (ME). We discuss the fundamental models for regression and classification and also their training with the expectation-maximization algorithm. We follow the discussion with improvements to the ME model and focus particularly on the mixtures of Gaussian process experts. 
We provide a review of the literature for other training methods, such as the alternative localized ME training, and cover the variational learning of ME in detail. In addition, we describe the model selection literature which encompasses finding the optimum number of experts, as well as the depth of the tree. We present the advances in ME in the classification area and present some issues concerning the classification model. We list the statistical properties of ME, discuss how the model has been modified over the years, compare ME to some popular algorithms, and list several applications. We conclude our survey with future directions and provide a list of publicly available datasets and a list of publicly available software that implement ME. Finally, we provide examples for regression and classification. We believe that the study described in this paper will provide quick access to the relevant literature for researchers and practitioners who would like to improve or use ME, and that it will stimulate further studies in ME.", "title": "" }, { "docid": "826e01210bb9ce8171ed72043b4a304d", "text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.", "title": "" } ]
scidocsrr
692f48643ce7ceb2d3e9039f823dd1bb
Data Mining techniques for the detection of fraudulent financial statements
[ { "docid": "2f9ebb8992542b8d342642b6ea361b54", "text": "Falsifying Financial Statements involves the manipulation of financial accounts by overstating assets, sales and profit, or understating liabilities, expenses, or losses. This paper explores the effectiveness of an innovative classification methodology in detecting firms that issue falsified financial statements (FFS) and the identification of the factors associated to FFS. The methodology is based on the concepts of multicriteria decision aid (MCDA) and the application of the UTADIS classification method (UTilités Additives DIScriminantes). A sample of 76 Greek firms (38 with FFS and 38 non-FFS) described over ten financial ratios is used for detecting factors associated with FFS. A Jackknife procedure approach is employed for model validation and comparison with multivariate statistical techniques, namely discriminant and logit analysis. The results indicate that the proposed MCDA methodology outperforms traditional statistical techniques which are widely used for FFS detection purposes. Furthermore, the results indicate that the investigation of financial information can be helpful towards the identification of FFS and highlight the importance of financial ratios such as the total debt to total assets ratio, the inventories to sales ratio, the net profit to sales ratio and the sales to total assets ratio.", "title": "" }, { "docid": "113373d6a9936e192e5c3ad016146777", "text": "This paper examines published data to develop a model for detecting factors associated with false financial statements (FFS). Most false financial statements in Greece can be identified on the basis of the quantity and content of the qualifications in the reports filed by the auditors on the accounts. A sample of a total of 76 firms includes 38 with FFS and 38 non-FFS. Ten financial variables are selected for examination as potential predictors of FFS. Univariate and multivariate statistical techniques such as logistic regression are used to develop a model to identify factors associated with FFS. The model is accurate in classifying the total sample correctly with accuracy rates exceeding 84 per cent. The results therefore demonstrate that the models function effectively in detecting FFS and could be of assistance to auditors, both internal and external, to taxation and other state authorities and to the banking system. the empirical results and discussion obtained using univariate tests and multivariate logistic regression analysis. Finally, in the fifth section come the concluding remarks.", "title": "" }, { "docid": "eed4d069544649b2c80634bdacbda372", "text": "Data mining tools become important in finance and accounting. Their classification and prediction abilities enable them to be used for the purposes of bankruptcy prediction, going concern status and financial distress prediction, management fraud detection, credit risk estimation, and corporate performance prediction. This study aims to provide a state-of-the-art review of the relative literature and to indicate relevant research opportunities.", "title": "" } ]
[ { "docid": "0b6f3498022abdf0407221faba72dcf1", "text": "A broadband coplanar waveguide (CPW) to coplanar strip (CPS) transmission line transition directly integrated with an RF microelectromechanical systems reconfigurable multiband antenna is presented in this paper. This transition design exhibits very good performance up to 55 GHz, and uses a minimum number of dissimilar transmission line sections and wire bonds, achieving a low-loss and low-cost balancing solution to feed planar antenna designs. The transition design methodology that was followed is described and measurement results are presented.", "title": "" }, { "docid": "be115d8bd86e1ef81f8056a2e97a3f01", "text": "Sepsis remains a major cause of mortality and morbidity in neonates, and, as a consequence, antibiotics are the most frequently prescribed drugs in this vulnerable patient population. Growth and dynamic maturation processes during the first weeks of life result in large inter- and intrasubject variability in the pharmacokinetics (PK) and pharmacodynamics (PD) of antibiotics. In this review we (1) summarize the available population PK data and models for primarily renally eliminated antibiotics, (2) discuss quantitative approaches to account for effects of growth and maturation processes on drug exposure and response, (3) evaluate current dose recommendations, and (4) identify opportunities to further optimize and personalize dosing strategies of these antibiotics in preterm and term neonates. Although population PK models have been developed for several of these drugs, exposure-response relationships of primarily renally eliminated antibiotics in these fragile infants are not well understood, monitoring strategies remain inconsistent, and consensus on optimal, personalized dosing of these drugs in these patients is absent. Tailored PK/PD studies and models are useful to better understand relationships between drug exposures and microbiological or clinical outcomes. Pharmacometric modeling and simulation approaches facilitate quantitative evaluation and optimization of treatment strategies. National and international collaborations and platforms are essential to standardize and harmonize not only studies and models but also monitoring and dosing strategies. Simple bedside decision tools assist clinical pharmacologists and neonatologists in their efforts to fine-tune and personalize the use of primarily renally eliminated antibiotics in term and preterm neonates.", "title": "" }, { "docid": "36d9bbd435fafca98af3024c2fd19616", "text": "There is an increasing demand for algorithms to explain their outcomes. So far, there is no method that explains the rankings produced by a ranking algorithm. To address this gap we propose LISTEN, a LISTwise ExplaiNer, to explain rankings produced by a ranking algorithm. To efficiently use LISTEN in production, we train a neural network to learn the underlying explanation space created by LISTEN; we call this model Q-LISTEN. We show that LISTEN produces faithful explanations and that Q-LISTEN is able to learn these explanations.
Moreover, we show that LISTEN is safe to use in a real world environment: users of a news recommendation system do not behave significantly differently when they are exposed to explanations generated by LISTEN instead of manually generated explanations.", "title": "" }, { "docid": "e0400a04d85641f7a658d9c55295997d", "text": "End-to-end encryption has been heralded by privacy and security researchers as an effective defence against dragnet surveillance, but there is no evidence of widespread end-user uptake. We argue that the non-adoption of end-to-end encryption might not be entirely due to usability issues identified by Whitten and Tygar in their seminal paper “Why Johnny Can’t Encrypt”. Our investigation revealed a number of fundamental issues such as incomplete threat models, misaligned incentives, and a general absence of understanding of the email architecture. From our data and related research literature we found evidence of a number of potential explanations for the low uptake of end-to-end encryption. This suggests that merely increasing the availability and usability of encryption functionality in email clients will not automatically encourage increased deployment by email users. We shall have to focus, first, on building comprehensive end-user mental models related to email, and email security. We conclude by suggesting directions for future research.", "title": "" }, { "docid": "ccddd7df2b5246c44d349bfb0aae499a", "text": "We consider stochastic multi-armed bandit problems with complex actions over a set of basic arms, where the decision maker plays a complex action rather than a basic arm in each round. The reward of the complex action is some function of the basic arms’ rewards, and the feedback observed may not necessarily be the reward per-arm. For instance, when the complex actions are subsets of the arms, we may only observe the maximum reward over the chosen subset. Thus, feedback across complex actions may be coupled due to the nature of the reward function. We prove a frequentist regret bound for Thompson sampling in a very general setting involving parameter, action and observation spaces and a likelihood function over them. The bound holds for discretely-supported priors over the parameter space without additional structural properties such as closed-form posteriors, conjugate prior structure or independence across arms. The regret bound scales logarithmically with time but, more importantly, with an improved constant that non-trivially captures the coupling across complex actions due to the structure of the rewards. As applications, we derive improved regret bounds for classes of complex bandit problems involving selecting subsets of arms, including the first nontrivial regret bounds for nonlinear MAX reward feedback from subsets.", "title": "" }, { "docid": "c1ffc050eaee547bd0eb070559ffc067", "text": "This paper proposes a method for designing a sentence set for utterances taking account of prosody. This method is based on a measure of coverage which incorporates two factors: (1) the distribution of voice fundamental frequency and phoneme duration predicted by the prosody generation module of a TTS; (2) perceptual damage to naturalness due to prosody modification. A set of 500 sentences with a predicted coverage of 82.6% was designed by this method, and used to collect a speech corpus. The obtained speech corpus yielded 88% of the predicted coverage.
The data size was reduced to 49% in terms of number of sentences (89% in terms of number of phonemes) compared to a general-purpose corpus designed without taking prosody into account.", "title": "" }, { "docid": "a70e664e2fcea37836cc55096295c4f4", "text": "This article reviews published data on familial recurrent hydatidiform mole with particular reference to the genetic basis of this condition, the likely outcome of subsequent pregnancies in affected women and the risk of persistent trophoblastic disease following molar pregnancies in these families. Familial recurrent hydatidiform mole is characterized by recurrent complete hydatidiform moles of biparental, rather than the more usual androgenetic, origin. Although the specific gene defect in these families has not been identified, genetic mapping has shown that in most families the gene responsible is located in a 1.1 Mb region on chromosome 19q13.4. Mutations in this gene result in dysregulation of imprinting in the female germ line with abnormal development of both embryonic and extraembryonic tissue. Subsequent pregnancies in women diagnosed with this condition are likely to be complete hydatidiform moles. In 152 pregnancies in affected women, 113 (74%) were complete hydatidiform moles, 26 (17%) were miscarriages, 6 (4%) were partial hydatidiform moles, and 7 (5%) were normal pregnancies. Molar pregnancies in women with familial recurrent hydatidiform mole have a risk of progressing to persistent trophoblastic disease similar to that of androgenetic complete hydatidiform mole.", "title": "" }, { "docid": "be07151d662636a2e0f2d3805fba3181", "text": "Appropriate automation trust is a prerequisite for safe, comfortable and efficient use of highly automated driving systems (HADS). Earlier research indicates that a drivers’ nationality and Take-Over Requests (TOR) due to imperfect system reliability might affect trust, but this has never been investigated in the context of highly automated driving. A driving simulator study (N = 80) showed that TORs only temporarily lowered trust in HADSs, and revealed similarities in trust formation between German and Chinese drivers. Trust was significantly higher after experiencing the system than before, both for German and Chinese participants. However, Chinese drivers reported significantly higher automation mistrust than German drivers. Self-report measures of automation trust were not connected to behavioral measures. The results support a distinction between automation trust and mistrust as separate constructs, shortand long-term effects of TORs on automation trust, and cultural differences in automation trust.", "title": "" }, { "docid": "5aed256aaca0a1f2fe8a918e6ffb62bd", "text": "Zero-shot learning (ZSL) enables solving a task without the need to see its examples. In this paper, we propose two ZSL frameworks that learn to synthesize parameters for novel unseen classes. First, we propose to cast the problem of ZSL as learning manifold embeddings from graphs composed of object classes, leading to a flexible approach that synthesizes “classifiers” for the unseen classes. Then, we define an auxiliary task of synthesizing “exemplars” for the unseen classes to be used as an automatic denoising mechanism for any existing ZSL approaches or as an effective ZSL model by itself. On five visual recognition benchmark datasets, we demonstrate the superior performances of our proposed frameworks in various scenarios of both conventional and generalized ZSL. 
Finally, we provide valuable insights through a series of empirical analyses, among which are a comparison of semantic representations on the full ImageNet benchmark as well as a comparison of metrics used in generalized ZSL. Our code and data are publicly available at https://github.com/pujols/Zero-shot-learning-journal.", "title": "" }, { "docid": "121f3b0f438fc1eca67a9b7051c81d37", "text": "I examine whether easily observable variables such as beauty, race, and the way a loan applicant presents himself affect lenders’ decisions, once hard financial information about credit scores, employment history, homeownership, and other financial information are taken into account. I use data from Prosper.com, a 150 million dollars online lending market in which borrowers post loan requests that include verifiable financial information, photos, an offered interest rate, and related context. Borrowers whose beauty is rated above average are 1.41 percentage points more likely to get a loan and, given a loan, pay 81 basis points less than an average-looking borrower with the same credentials. Black borrowers pay between 139 and 146 basis points more than otherwise similar White borrowers, although they are not more likely to become delinquent. Similarity between borrowers and lenders has also a powerful impact on lenders’ decisions. In my sample personal characteristics are not, all else equal, significantly related to subsequent delinquency rates with the exception of beauty, which is associated with substantially higher delinquency probability. The findings are consistent with personal characteristics affecting loan supply through lenders’ preferences (taste-based discrimination a la Becker) and perception, rather than statistical discrimination based on inferences from their previous experience.", "title": "" }, { "docid": "1a1467aa70bbcc97e01a6ec25899bb17", "text": "Despite numerous studies to reduce the power consumption of the display-related components of mobile devices, previous works have led to a deterioration in user experience due to compromised graphic quality. In this paper, we propose an effective scheme to reduce the energy consumption of the display subsystems of mobile devices without compromising user experience. In preliminary experiments, we noticed that mobile devices typically perform redundant display updates even if the display content does not change. Based on this observation, we first propose a metric called the content rate, which is defined as the number of meaningful frame changes in a second. Our scheme then estimates an optimal refresh rate based on the content rate in order to eliminate redundant display updates. Also proposed is the flicker compensation technique, which prevents the flickering problem caused by the reduced refresh rate. Extensive experiments conducted on the latest smartphones demonstrated that our system effectively reduces the overall power consumption of mobile devices by 35 percent while simultaneously maintaining satisfactory display quality.", "title": "" }, { "docid": "1dfa61f341919dcb4169c167a92c2f43", "text": "This paper presents an algorithm for the detection of micro-crack defects in the multicrystalline solar cells.
This detection goal is very challenging due to the presence of various types of image anomalies like dislocation clusters, grain boundaries, and other artifacts due to the spurious discontinuities in the gray levels. In this work, an algorithm featuring an improved anisotropic diffusion filter and advanced image segmentation technique is proposed. The methods and procedures are assessed using 600 electroluminescence images, comprising 313 intact and 287 defected samples. Results indicate that the methods and procedures can accurately detect micro-crack in solar cells with sensitivity, specificity, and accuracy averaging at 97%, 80%, and 88%, respectively.", "title": "" }, { "docid": "b322d7c4f4222f98422a822d9a4e43d6", "text": "Workflow management systems (WFMSs) have attracted a lot of interest both in academia and the business community. A workflow consists of a collection of tasks that are organized to facilitate some business process specification. To simplify the complexity of security administration, it is common to use role-based access control (RBAC) to grant authorization to roles and users. Typically, security policies are expressed as constraints on users, roles, tasks and the workflow itself. A workflow system can become very complex and involve several organizations or different units of an organization, thus the number of security policies may be very large and their interactions very complex. It is clearly important to know whether the existence of such constraints will prevent certain instances of the workflow from completing. Unfortunately, no existing constraint models have considered this problem satisfactorily. In this paper, we define a model for constrained workflow systems that includes local and global cardinality constraints, separation of duty constraints and binding of duty constraints. We define the notion of a workflow specification and of a constrained workflow authorization schema. Our main result is to establish necessary and sufficient conditions for the set of constraints that ensure a sound constrained workflow authorization schema, that is, for any user or any role who are authorized to a task, there is at least one complete workflow instance when this user or this role executes this task.", "title": "" }, { "docid": "c3e8960170cb72f711263e7503a56684", "text": "BACKGROUND\nThe deltoid ligament has both superficial and deep layers and consists of up to six ligamentous bands. The prevalence of the individual bands is variable, and no consensus as to which bands are constant or variable exists. Although other studies have looked at the variance in the deltoid anatomy, none have quantified the distance to relevant osseous landmarks.\n\n\nMETHODS\nThe deltoid ligaments from fourteen non-paired, fresh-frozen cadaveric specimens were isolated and the ligamentous bands were identified. The lengths, footprint areas, orientations, and distances from relevant osseous landmarks were measured with a three-dimensional coordinate measurement device.\n\n\nRESULTS\nIn all specimens, the tibionavicular, tibiospring, and deep posterior tibiotalar ligaments were identified. Three additional bands were variable in our specimen cohort: the tibiocalcaneal, superficial posterior tibiotalar, and deep anterior tibiotalar ligaments. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament. 
The origins from the distal center of the intercollicular groove were 16.1 mm (95% confidence interval, 14.7 to 17.5 mm) for the tibionavicular ligament, 13.1 mm (95% confidence interval, 11.1 to 15.1 mm) for the tibiospring ligament, and 7.6 mm (95% confidence interval, 6.7 to 8.5 mm) for the deep posterior tibiotalar ligament. Relevant to other pertinent osseous landmarks, the tibionavicular ligament inserted at 9.7 mm (95% confidence interval, 8.4 to 11.0 mm) from the tuberosity of the navicular, the tibiospring inserted at 35% (95% confidence interval, 33.4% to 36.6%) of the spring ligament's posteroanterior distance, and the deep posterior tibiotalar ligament inserted at 17.8 mm (95% confidence interval, 16.3 to 19.3 mm) from the posteromedial talar tubercle.\n\n\nCONCLUSIONS\nThe tibionavicular, tibiospring, and deep posterior tibiotalar ligament bands were constant components of the deltoid ligament. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament.\n\n\nCLINICAL RELEVANCE\nThe anatomical data regarding the deltoid ligament bands in this study will help to guide anatomical placement of repairs and reconstructions for deltoid ligament injury or instability.", "title": "" }, { "docid": "70e3a918cb152278360c2c54a8934b2c", "text": "In translation, considering the document as a whole can help to resolve ambiguities and inconsistencies. In this paper, we propose a cross-sentence context-aware approach and investigate the influence of historical contextual information on the performance of neural machine translation (NMT). First, this history is summarized in a hierarchical way. We then integrate the historical representation into NMT in two strategies: 1) a warm-start of encoder and decoder states, and 2) an auxiliary context source for updating decoder states. Experimental results on a large Chinese-English translation task show that our approach significantly improves upon a strong attention-based NMT system by up to +2.1 BLEU points.", "title": "" }, { "docid": "cdeb21f612717d8e77aedc6a155a7948", "text": "In this paper we present the Transalg system, designed to produce SAT encodings for discrete functions, written as programs in a specific language. Translation of such programs to SAT is based on propositional encoding methods for formal computing models and on the concept of symbolic execution. We used the Transalg system to make SAT encodings for a number of cryptographic functions.", "title": "" }, { "docid": "0d1da055e444a90ec298a2926de9fe7b", "text": "Cryptocurrencies have experienced recent surges in interest and price. It has been discovered that there are time intervals where cryptocurrency prices and certain online and social media factors appear related. In addition it has been noted that cryptocurrencies are prone to experience intervals of bubble-like price growth. The hypothesis investigated here is that relationships between online factors and price are dependent on market regime. In this paper, wavelet coherence is used to study co-movement between a cryptocurrency price and its related factors, for a number of examples. This is used alongside a well-known test for financial asset bubbles to explore whether relationships change dependent on regime. The primary finding of this work is that medium-term positive correlations between online factors and price strengthen significantly during bubble-like regimes of the price series; this explains why these relationships have previously been seen to appear and disappear over time.
A secondary finding is that short-term relationships between the chosen factors and price appear to be caused by particular market events (such as hacks / security breaches), and are not consistent from one time interval to another in the effect of the factor upon the price. In addition, for the first time, wavelet coherence is used to explore the relationships between different cryptocurrencies.", "title": "" }, { "docid": "cdb4d8a7b1654e73693420c7ce5936d0", "text": "As the current MOSFET scaling trend is facing strong limitations, technologies exploiting novel degrees of freedom at physical and architecture level are promising candidates to enable the continuation of Moore's predictions. In this paper, we report on the fabrication of novel ambipolar Silicon nanowire (SiNW) Schottky-barrier (SB) FET transistors featuring two independent gate-all-around electrodes and vertically stacked SiNW channels. A top-down approach was employed for the nanowire fabrication, using an e-beam lithography defined design pattern. In these transistors, one gate electrode enables the dynamic configuration of the device polarity (n - or p-type) by electrostatic doping of the channel in proximity of the source and drain SBs. The other gate electrode, acting on the center region of the channel switches ON or OFF the device. Measurement results on silicon show Ion/Ioff >106 and subthreshold slopes approaching the thermal limit, ≈ 64 mV/dec (70 mV/dec) for p(n)-type operation in the same physical device. Finally, we show that the XOR logic operation is embedded in the device characteristic, and we demonstrate for the first time a fully functional two-transistor XOR gate.", "title": "" }, { "docid": "9e065776a9f4283a6eb74e315c36a4b4", "text": "We introduce a new family of positive-definite kernels for large margin classification in support vector machines (SVMs). These kernels mimic the computation in large neural networks with one layer of hidden units. We also show how to derive new kernels, by recursive composition, that may be viewed as mapping their inputs through a series of nonlinear feature spaces. These recursively derived kernels mimic the computation in deep networks with multiple hidden layers. We evaluate SVMs with these kernels on problems designed to illustrate the advantages of deep architectures. Compared to previous benchmarks, we find that on some problems, these SVMs yield state-of-the-art results, beating not only other SVMs but also deep belief nets.", "title": "" } ]
scidocsrr
1ff14bad3f3a373ac567fc5be6b99f5d
How to Achieve High Classification Accuracy with Just a Few Labels: A Semi-supervised Approach Using Sampled Packets
[ { "docid": "9b203633a5ee7dbb93bd75bd345f4967", "text": "IP traffic classification has been a vitally important topic that attracts persistent interest in the networking and machine learning communities for past decades. While there exist quite a number of works applying machine learning techniques to realize IP traffic classification, most works suffer from limitations like either heavily depending on handcrafted features or be only able to handle offline traffic classification. To get rid of the aforementioned weakness, in this paper, we propose our online Convolutional Neural Networks (CNNs) based traffic classification framework named Seq2Img. The basic idea is to employ a compact nonparametric kernel embedding based method to convert early flow sequences into images which fully capture the static and dynamic behaviors of different applications and avoid using handcrafted features that might cause loss of information. A CNN is then applied on the generated images to obtain traffic classification results. Experiments on real network traffic are conducted and encouraging results justify the efficacy of our proposed approach.", "title": "" }, { "docid": "9b17c6ff30e91f88e52b2db4eb331478", "text": "Network traffic classification has become significantly important with rapid growth of current Internet network and online applications. There have been numerous studies on this topic which have led to many different approaches. Most of these approaches use predefined features extracted by an expert in order to classify network traffic. In contrast, in this study, we propose a deep learning based approach which integrates both feature extraction and classification phases into one system. Our proposed scheme, called “Deep Packet,” can handle both traffic characterization, in which the network traffic is categorized into major classes (e.g., FTP and P2P), and application identification in which identification of end-user applications (e.g., BitTorrent and Skype) is desired. Contrary to the most of current methods, Deep Packet can identify encrypted traffic and also distinguishes between VPN and non-VPN network traffic. After an initial pre-processing phase on data, packets are fed into Deep Packet framework that embeds stacked autoencoder and convolution neural network (CNN) in order to classify network traffic. Deep packet with CNN as its classification model achieved F1 score of 0.95 in application identification task and it also accomplished F1 score of 0.97 in traffic characterization task. To the best of our knowledge, Deep Packet outperforms all of the proposed classification methods on UNB ISCX VPN-nonVPN dataset.", "title": "" }, { "docid": "13eaa316c8e41a9cc3807d60ba72db66", "text": "This is a short paper introducing pitfalls when implementing averaged scores. Although, it is common to compute averaged scores, it is good to specify in detail how the scores are computed.", "title": "" } ]
[ { "docid": "ad808ef13f173eda961b6157a766f1a9", "text": "Written text often provides sufficient clues to identify the author, their gender, age, and other important attributes. Consequently, the authorship of training and evaluation corpora can have unforeseen impacts, including differing model performance for different user groups, as well as privacy implications. In this paper, we propose an approach to explicitly obscure important author characteristics at training time, such that representations learned are invariant to these attributes. Evaluating on two tasks, we show that this leads to increased privacy in the learned representations, as well as more robust models to varying evaluation conditions, including out-of-domain corpora.", "title": "" }, { "docid": "323abed1a623e49db50bed383ab26a92", "text": "Robust object detection is a critical skill for robotic applications in complex environments like homes and offices. In this paper we propose a method for using multiple cameras to simultaneously view an object from multiple angles and at high resolutions. We show that our probabilistic method for combining the camera views, which can be used with many choices of single-image object detector, can significantly improve accuracy for detecting objects from many viewpoints. We also present our own single-image object detection method that uses large synthetic datasets for training. Using a distributed, parallel learning algorithm, we train from very large datasets (up to 100 million image patches). The resulting object detector achieves high performance on its own, but also benefits substantially from using multiple camera views. Our experimental results validate our system in realistic conditions and demonstrates significant performance gains over using standard single-image classifiers, raising accuracy from 0.86 area-under-curve to 0.97.", "title": "" }, { "docid": "d7c2f092c8442434d5add23162be20e6", "text": "Since its publication in 2007, the Tokyo Guidelines for the management of acute cholangitis and cholecystitis (TG07) have been widely adopted. The validation of TG07 conducted in terms of clinical practice has shown that the diagnostic criteria for acute cholecystitis are highly reliable but that the definition of definite diagnosis is ambiguous. Discussion by the Tokyo Guidelines Revision Committee concluded that acute cholecystitis should be suspected when Murphy's sign, local inflammatory findings in the gallbladder such as right upper quadrant abdominal pain and tenderness, and fever and systemic inflammatory reaction findings detected by blood tests are present but that definite diagnosis of acute cholecystitis can be made only on the basis of the imaging of ultrasonography, computed tomography or scintigraphy (HIDA scan). These proposed diagnostic criteria provided better specificity and accuracy rates than the TG07 diagnostic criteria. As for the severity assessment criteria in TG07, there is evidence that TG07 resulted in clarification of the concept of severe acute cholecystitis. Furthermore, there is evidence that severity assessment in TG07 has led to a reduction in the mean duration of hospital stay. As for the factors used to establish a moderate grade of acute cholecystitis, such as leukocytosis, ALP, old age, diabetes, being male, and delay in admission, no new strong evidence has been detected indicating that a change in the criteria used in TG07 is needed. 
Therefore, it was judged that the severity assessment criteria of TG07 could be applied in the updated Tokyo Guidelines (TG13) with minor changes. TG13 presents new standards for the diagnosis, severity grading and management of acute cholecystitis. Free full-text articles and a mobile application of TG13 are available via http://www.jshbps.jp/en/guideline/tg13.html.", "title": "" }, { "docid": "fc50b185323c45e3d562d24835e99803", "text": "The neuropeptide calcitonin gene-related peptide (CGRP) is implicated in the underlying pathology of migraine by promoting the development of a sensitized state of primary and secondary nociceptive neurons. The ability of CGRP to initiate and maintain peripheral and central sensitization is mediated by modulation of neuronal, glial, and immune cells in the trigeminal nociceptive signaling pathway. There is accumulating evidence to support a key role of CGRP in promoting cross excitation within the trigeminal ganglion that may help to explain the high co-morbidity of migraine with rhinosinusitis and temporomandibular joint disorder. In addition, there is emerging evidence that CGRP facilitates and sustains a hyperresponsive neuronal state in migraineurs mediated by reported risk factors such as stress and anxiety. In this review, the significant role of CGRP as a modulator of the trigeminal system will be discussed to provide a better understanding of the underlying pathology associated with the migraine phenotype.", "title": "" }, { "docid": "8bf654627641ae874c6c864dcf50be2c", "text": "Social media provide a low-cost alternative source for public health surveillance and health-related classification plays an important role to identify useful information. In this paper, we summarized the recent classification methods using social media in public health. These methods rely on bag-of-words (BOW) model and have difficulty grasping the semantic meaning of texts. Unlike these methods, we present a word embedding based clustering method. Word embedding is one of the strongest trends in Natural Language Processing (NLP) at this moment. It learns the optimal vectors from surrounding words and the vectors can represent the semantic information of words. A tweet can be represented as a few vectors and divided into clusters of similar words. According to similarity measures of all the clusters, the tweet can then be classified as related or unrelated to a topic (e.g., influenza). Our simulations show a good performance and the best accuracy achieved was 87.1%. Moreover, the proposed method is unsupervised. It does not require labor to label training data and can be readily extended to other classification problems or other diseases.", "title": "" }, { "docid": "39cd5de2f8370814e15cbfb264731334", "text": "Deep convolutional networks can achieve impressive results on RGB scene recognition thanks to large data sets such as places. In contrast, RGB-D scene recognition is still underdeveloped in comparison, due to two limitations of RGB-D data we address in this paper. The first limitation is the lack of depth data for training deep learning models. Rather than fine tuning or transferring RGB-specific features, we address this limitation by proposing an architecture and a two-step training approach that directly learns effective depth-specific features using weak supervision via patches. The resulting RGB-D model also benefits from more complementary multimodal features. 
Another limitation is the short range of depth sensors (typically 0.5 m to 5.5 m), resulting in depth images not capturing distant objects in the scenes that RGB images can. We show that this limitation can be addressed by using RGB-D videos, where more comprehensive depth information is accumulated as the camera travels across the scenes. Focusing on this scenario, we introduce the ISIA RGB-D video data set to evaluate RGB-D scene recognition with videos. Our video recognition architecture combines convolutional and recurrent neural networks that are trained in three steps with increasingly complex data to learn effective features (i.e., patches, frames, and sequences). Our approach obtains the state-of-the-art performances on RGB-D image (NYUD2 and SUN RGB-D) and video (ISIA RGB-D) scene recognition.", "title": "" }, { "docid": "0f71e64aaf081b6624f442cb95b2220c", "text": "Objective\nElectronic health record (EHR)-based phenotyping infers whether a patient has a disease based on the information in his or her EHR. A human-annotated training set with gold-standard disease status labels is usually required to build an algorithm for phenotyping based on a set of predictive features. The time intensiveness of annotation and feature curation severely limits the ability to achieve high-throughput phenotyping. While previous studies have successfully automated feature curation, annotation remains a major bottleneck. In this paper, we present PheNorm, a phenotyping algorithm that does not require expert-labeled samples for training.\n\n\nMethods\nThe most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with high area under the receiver operating curve (AUC) for prediction. The transformed features are then denoised and combined into a score for accurate disease classification.\n\n\nResults\nWe validated the accuracy of PheNorm with 4 phenotypes: coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis. The AUCs of the PheNorm score reached 0.90, 0.94, 0.95, and 0.94 for the 4 phenotypes, respectively, which were comparable to the accuracy of supervised algorithms trained with sample sizes of 100-300, with no statistically significant difference.\n\n\nConclusion\nThe accuracy of the PheNorm algorithms is on par with algorithms trained with annotated samples. PheNorm fully automates the generation of accurate phenotyping algorithms and demonstrates the capacity for EHR-driven annotations to scale to the next level - phenotypic big data.", "title": "" }, { "docid": "ee617dacdb47fd02a797f2968aaa784f", "text": "The Internet of Things (IoT) is defined as a paradigm in which objects equipped with sensors, actuators, and processors communicate with each other to serve a meaningful purpose. In this paper, we survey state-of-the-art methods, protocols, and applications in this new emerging area. This survey paper proposes a novel taxonomy for IoT technologies, highlights some of the most important technologies, and profiles some applications that have the potential to make a striking difference in human life, especially for the differently abled and the elderly. 
As compared to similar survey papers in the area, this paper is far more comprehensive in its coverage and exhaustively covers most major technologies spanning from sensors to applications.", "title": "" }, { "docid": "1a3c57f03a2f19235ac4d4a4192e7026", "text": "Microwave technology plays a more important role in modern industrial sensing applications. Pushed by the significant progress in monolithic microwave integrated circuit technology over the past decades, complex sensing systems operating in the microwave and even millimeter-wave range are available for reasonable costs combined with exquisite performance. In the context of industrial sensing, this stimulates new approaches for metrology based on microwave technology. An old measurement principle nearly forgotten over the years has recently gained more and more attention in both academia and industry: the six-port interferometer. This paper reviews the basic concept, investigates promising applications in remote, as well as contact-based sensing and compares the system with state-of-the-art metrology. The significant advantages will be discussed just as the limitations of the six-port architecture. Particular attention will be paid to impairment effects and non-ideal behavior, as well as compensation and linearization concepts. It will be shown that in application fields, like remote distance sensing, precise alignment measurements, as well as interferometrically-evaluated mechanical strain analysis, the six-port architecture delivers extraordinary measurement results combined with high measurement data update rates for reasonable system costs. This makes the six-port architecture a promising candidate for industrial metrology.", "title": "" }, { "docid": "12ce2eef03ace3a51177a35473f935be", "text": "In this letter, a novel slot-coupling feeding technique has been adopted to realize a circularly polarized (CP) 2 × 2 microstrip array. Each array element is fed through two microstrip lines that are excited 90° out of phase (dual-feed technique) and coupled to a square patch by means of a square-ring slot realized in the feeding network ground plane. Design procedure, simulation results, and measurement data are presented for a 2 × 2 array working in the WiMax 3.3-3.8 GHz frequency band (14% percentage bandwidth). Due to both the symmetry properties of the novel slot-coupling feeding configuration and the implementation of a sequential rotation technique, excellent axial ratio (AR) performance is achieved in the WiMax band (AR < 1.35 dB at broadside) and for any direction in the antenna main beam (AR < 2.25 dB at 3.55 GHz). Actually, the 3-dB AR bandwidth is larger than the WiMax frequency band, as it goes up to about 30%.", "title": "" }, { "docid": "80352d036462d9a4989dea2adffd9d91", "text": "We present a technique for drawing ornamental designs consisting of placed instances of simple shapes. These shapes, which we call elements, are selected from a small library of templates. The elements are deformed to flow along a direction field interpolated from user-supplied strokes, giving a sense of visual flow to the final composition, and constrained to lie within a container region. Our implementation computes a vector field based on user strokes, constructs streamlines that conform to the vector field, and places an element over each streamline. 
An iterative refinement process then shifts and stretches the elements to improve the composition.", "title": "" }, { "docid": "5325778a57d0807e9b149108ea9e57d8", "text": "This paper presents a comparison study between 10 automatic and six interactive methods for liver segmentation from contrast-enhanced CT images. It is based on results from the \"MICCAI 2007 Grand Challenge\" workshop, where 16 teams evaluated their algorithms on a common database. A collection of 20 clinical images with reference segmentations was provided to train and tune algorithms in advance. Participants were also allowed to use additional proprietary training data for that purpose. All teams then had to apply their methods to 10 test datasets and submit the obtained results. Employed algorithms include statistical shape models, atlas registration, level-sets, graph-cuts and rule-based systems. All results were compared to reference segmentations five error measures that highlight different aspects of segmentation accuracy. All measures were combined according to a specific scoring system relating the obtained values to human expert variability. In general, interactive methods reached higher average scores than automatic approaches and featured a better consistency of segmentation quality. However, the best automatic methods (mainly based on statistical shape models with some additional free deformation) could compete well on the majority of test images. The study provides an insight in performance of different segmentation approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.", "title": "" }, { "docid": "5ef7a618db00daa44eb6596d65f29e67", "text": "Mobile phones are becoming de facto pervasive devices for people's daily use. This demonstration illustrates a new interaction, Tilt & Touch, to enable a smart phone to be a 3D controller. It exploits capacitive touchscreen and built-in MEMS motion sensors. When people want to navigate in a virtual reality environment on a large display, they can tilt the phone for viewpoint transforming, touch the phone screen for avatar moving, and pinch screen for viewing camera zooming. The virtual objects in the virtual reality environment can be rotated accordingly by tilting the phone.", "title": "" }, { "docid": "890abd5854f9d9d35b5d7d35ae95ae7a", "text": "CONTEXT\nAngiomyolipoma is a rare tumor characterized histologically by a mixture of spindle cells, adipose tissue, epithelioid cells, and vascular tissue. It usually involves the kidney followed by the liver whereby the majority of affected patients are female, and many cases arise in the setting of tuberous sclerosis.\n\n\nCASE REPORT\nWe report a case of a 33-year-old female with an asymptomatic incidental right renal mass suggestive of an angiomyolipoma in conjunction with numerous pancreatic masses.\n\n\nCONCLUSIONS\nThe utility of EUS in the differential diagnosis of pancreatic tumors is well established. This is the first known reported EUS detection and FNA confirmation of angiomyolipoma metastatic to the pancreas and should now be added to the already broad differential of metastatic pancreatic tumors.", "title": "" }, { "docid": "7d74b896764837904019a0abff967065", "text": "Asymptotic behavior of a recurrent neural network changes qualitatively at certain points in the parameter space, which are known as \\bifurcation points\". 
At bifurcation points, the output of a network can change discontinuously with the change of parameters and therefore convergence of gradient descent algorithms is not guaranteed. Furthermore, learning equations used for error gradient estimation can be unstable. However, some kinds of bifurcations are inevitable in training a recurrent network as an automaton or an oscillator. Some of the factors underlying successful training of recurrent networks are investigated, such as choice of initial connections, choice of input patterns, teacher forcing, and truncated learning equations.", "title": "" }, { "docid": "1e6c497fe53f8cba76bd8b432c618c1f", "text": "inputs into digital (down or up), analog (-1.0 to 1.0), and positional (touch and • mouse cursor). By building on a solid main loop you can easily add support for detecting chorded inputs and sequence inputs.", "title": "" }, { "docid": "748d71e6832288cd0120400d6069bf50", "text": "This paper introduces the matrix formalism of optics as a useful approach to the area of “light fields”. It is capable of reproducing old results in Integral Photography, as well as generating new ones. Furthermore, we point out the equivalence between radiance density in optical phase space and the light field. We also show that linear transforms in matrix optics are applicable to light field rendering, and we extend them to affine transforms, which are of special importance to designing integral view cameras. Our main goal is to provide solutions to the problem of capturing the 4D light field with a 2D image sensor. From this perspective we present a unified affine optics view on all existing integral / light field cameras. Using this framework, different camera designs can be produced. Three new cameras are proposed. Figure 1: Integral view of a seagull", "title": "" }, { "docid": "ffafffd33a69dbf4f04f6f7b67b3b56b", "text": "Significant advances have been made in Natural Language Processing (NLP) mod1 elling since the beginning of 2018. The new approaches allow for accurate results, 2 even when there is little labelled data, because these NLP models can benefit from 3 training on both task-agnostic and task-specific unlabelled data. However, these 4 advantages come with significant size and computational costs. 5 This workshop paper outlines how our proposed convolutional student architec6 ture, having been trained by a distillation process from a large-scale model, can 7 achieve 300× inference speedup and 39× reduction in parameter count. In some 8 cases, the student model performance surpasses its teacher on the studied tasks. 9", "title": "" }, { "docid": "262f1e965b311bf866ef5b924b6085a7", "text": "By considering the amount of uncertainty perceived and the willingness to bear uncertainty concomitantly, we provide a more complete conceptual model of entrepreneurial action that allows for examination of entrepreneurial action at the individual level of analysis while remaining consistent with a rich legacy of system-level theories of the entrepreneur. 
Our model not only exposes limitations of existing theories of entrepreneurial action but also contributes to a deeper understanding of important conceptual issues, such as the nature of opportunity and the potential for philosophical reconciliation among entrepreneurship scholars.", "title": "" }, { "docid": "7c52c82bdf1e86e27f52035d4cdbda45", "text": "This paper addresses the question of how much a previously obtained map of a road environment should be trusted for vehicle localisation during autonomous driving by assessing the probability that roadworks are being traversed. We compare two formulations of a roadwork prior: one based on Gaussian Process (GP) classification and the other on a more conventional Hidden Markov Model (HMM) in order to model correlations between nearby parts of a vehicle trajectory. Importantly, our formulation allows this prior to be updated efficiently and repeatedly to gain an ever more accurate model of the environment over time. In the absence of, or in addition to, any in-situ observations, information from dedicated web resources can readily be incorporated into the framework. We evaluate our model using real data from an autonomous car and show that although the GP and HMM are roughly commensurate in terms of mapping roadworks, the GP provides a more powerful representation and lower prediction error.", "title": "" } ]
scidocsrr
7ee7ccceebc4168d6a96b7187e38edae
Bedtime procrastination: introducing a new area of procrastination
[ { "docid": "eb10f86262180b122d261f5acbe4ce18", "text": "Procrasttnatton ts variously descnbed a? harmful, tnnocuous, or even beneficial Two longitudinal studies examined procrastination among students Procrasttnators reported lower stress and less illness than nonprocrasttnators early in the semester, but they reported higher stress and more illness late in the term, and overall they were sicker Procrastinators also received lower grades on atl assignment's Procrasttnatton thus appears to be a self-defeating behavior pattem marked by short-term benefits and long-term costs Doing one's work and fulfilling other obligations in a timely fashion seem like integral parts of rational, proper adult funcuoning Yet a majonty of the population admits to procrastinating at least sometimes, and substantial minonties admit to significant personal, occupational, or financial difficulties resulting from their dilatory behavior (Ferran, Johnson, & McCown, 1995) Procrastinauon is often condemned, particularly by people who do not think themselves guilty of it (Burka & Yuen, 1983, Ferran et dl, 1995) Cntics of procrastination depict it as a lazy self-indulgent habit of putting things off for no reason They say it is self-defeating m that It lowers the quality of performance, because one ends up with less time to work (Baumeister & Scher, 1988, Ellis & Knaus, 1977) Others depict it as a destructive strategy of self-handicappmg (Jones & Berglas, 1978), such a,s when people postpone or withhold effort so as to give themselves an excuse for anticipated poor performance (Tice, 1991, Tice & Baumeister, 1990) People who finish their tasks and assignments early may point self-nghteously to the stress suffered by procrastinators at the last minute and say that putting things off is bad for one's physical or mental health (see Boice, 1989, 1996, Rothblum, Solomon, & Murakami, 1986 Solomon & Rothblum, 1984) On the other hand, some procrastinators defend their practice They point out correctly that if one puts in the same amount of work on the project, it does not matter whether this is done early or late Some even say that procrastination improves perfonnance, because the imminent deadline creates excitement and pressure that elicit peak performance \"I do my best work under pressure,\" in the standard phrase (Ferran, 1992, Ferran et al , 1995, Uy, 1995) Even if it were true that stress and illness are higher for people who leave things unul the last minute—and research has not yet provided clear evidence that in fact they both are higher—this might be offset by the enjoyment of carefree times earlier (see Ainslie, 1992) The present investigation involved a longitudinal study of the effects of procrastination on quality of performance, stress, and illness Early in the semester, students were given an assignment with a deadline Procrastinators were identified usmg Lay's (1986) scale Students' well-being was assessed with self-reports of stress and illAddress correspondence Case Western Reserve Unive 7123, e-mail dxt2@po cwiu o Dianne M Tice Department of Psychology, sity 10900 Euclid Ave Cleveland OH 44106ness The validity of the scale was checked by ascertaining whethtr students tumed in the assignment early, on time, or late Finally, task performance was assessed by consulting the grades received Competing predictions could be made", "title": "" }, { "docid": "9c0baef3b1d0c0f13b87a2dbeb4769f9", "text": "In a longitudinal study of 140 eighth-grade students, self-discipline measured by self-report, parent report, 
teacher report, and monetary choice questionnaires in the fall predicted final grades, school attendance, standardized achievement-test scores, and selection into a competitive high school program the following spring. In a replication with 164 eighth graders, a behavioral delay-of-gratification task, a questionnaire on study habits, and a group-administered IQ test were added. Self-discipline measured in the fall accounted for more than twice as much variance as IQ in final grades, high school selection, school attendance, hours spent doing homework, hours spent watching television (inversely), and the time of day students began their homework. The effect of self-discipline on final grades held even when controlling for first-marking-period grades, achievement-test scores, and measured IQ. These findings suggest a major reason for students falling short of their intellectual potential: their failure to exercise self-discipline.", "title": "" }, { "docid": "6200e3a50d2e578d56ef9015149dd5fb", "text": "This study investigated the frequency of college students' procrastination on academic tasks and the reasons for procrastination behavior. A high percentage of students reported problems with procrastination on several specific academic tasks. Self-reported procrastination was positively correlated with the number of self-paced quizzes students took late in the semester and with participation in an experimental session offered late in the semester. A factor analysis of the reasons for procrastination indicated that the factors Fear of Failure and Aversiveness of the Task accounted for most of the variance. A small but very homogeneous group of subjects endorsed items on the Fear of Failure factor that correlated significantly with self-report measures of depression, irrational cognitions, low self-esteem, delayed study behavior, anxiety, and lack of assertion. A larger and relatively heterogeneous group of subjects reported procrastinating as a result of aversiveness of the task. The Aversiveness of the Task factor did not correlate significantly with anxiety or assertion, but it did correlate significantly with'depression, irrational cognitions, low self-esteem, and delayed study behavior. These results indicate that procrastination is not solely a deficit in study habits or time management, but involves a complex interaction of behavioral, cognitive, and affective components;", "title": "" } ]
[ { "docid": "db0bb2489a29f23fb49cec395ee7dfa8", "text": "Recently, a number of grasp detection methods have been proposed that can be used to localize robotic grasp configurations directly from sensor data without estimating object pose. The underlying idea is to treat grasp perception analogously to object detection in computer vision. These methods take as input a noisy and partially occluded RGBD image or point cloud and produce as output pose estimates of viable grasps, without assuming a known CAD model of the object. Although these methods generalize grasp knowledge to new objects well, they have not yet been demonstrated to be reliable enough for wide use. Many grasp detection methods achieve grasp success rates (grasp successes as a fraction of the total number of grasp attempts) between 75% and 95% for novel objects presented in isolation or in light clutter. Not only are these success rates too low for practical grasping applications, but the light clutter scenarios that are evaluated often do not reflect the realities of real world grasping. This paper proposes a number of innovations that together result in a significant improvement in grasp detection performance. The specific improvement in performance due to each of our contributions is quantitatively measured either in simulation or on robotic hardware. Ultimately, we report a series of robotic experiments that average a 93% end-to-end grasp success rate for novel objects presented in dense clutter.", "title": "" }, { "docid": "117c66505964344d9c350a4e57a4a936", "text": "Sorting is a key kernel in numerous big data application including database operations, graphs and text analytics. Due to low control overhead, parallel bitonic sorting networks are usually employed for hardware implementations to accelerate sorting. Although a typical implementation of merge sort network can lead to low latency and small memory usage, it suffers from low throughput due to the lack of parallelism in the final stage. We analyze a pipelined merge sort network, showing its theoretical limits in terms of latency, memory and, throughput. To increase the throughput, we propose a merge sort based hybrid design where the final few stages in the merge sort network are replaced with “folded” bitonic merge networks. In these “folded” networks, all the interconnection patterns are realized by streaming permutation networks (SPN). We present a theoretical analysis to quantify latency, memory and throughput of our proposed design. Performance evaluations are performed by experiments on Xilinx Virtex-7 FPGA with post place-androute results. We demonstrate that our implementation achieves a throughput close to 10 GBps, outperforming state-of-the-art implementation of sorting on the same hardware by 1.2x, while preserving lower latency and higher memory efficiency.", "title": "" }, { "docid": "0fa223f3e555cbea206640de7f699cf8", "text": "Transforming unstructured text into structured form is important for fashion e-commerce platforms that ingest tens of thousands of fashion products every day. While most of the e-commerce product extraction research focuses on extracting a single product from the product title using known keywords, little attention has been paid to discovering potentially multiple products present in the listing along with their respective relevant attributes, and leveraging the entire title and description text for this purpose. 
We fill this gap and propose a novel composition of sequence labeling and multi-task learning as an end-to-end trainable deep neural architecture. We systematically evaluate our approach on one of the largest tagged datasets in fashion e-commerce consisting of 25K listings labeled at word-level. Given 23 labels, we discover label-values with F1 score of 92.2%. When applied to 2M listings, we discovered 2.6M fashion items and 9.5M attribute values.", "title": "" }, { "docid": "09deba1b4b2dd95b821a4f5de68c7f7b", "text": "BACKGROUND\nStudies have shown that a significant proportion of people with epilepsy use complementary and alternative medicine (CAM). CAM use is known to vary between different ethnic groups and cultural contexts; however, little attention has been devoted to inter-ethnic differences within the UK population. We studied the use of biomedicine, complementary and alternative medicine, and ethnomedicine in a sample of people with epilepsy of South Asian origin living in the north of England.\n\n\nMETHODS\nInterviews were conducted with 30 people of South Asian origin and 16 carers drawn from a sampling frame of patients over 18 years old with epilepsy, compiled from epilepsy registers and hospital databases. All interviews were tape-recorded, translated if required and transcribed. A framework approach was adopted to analyse the data.\n\n\nRESULTS\nAll those interviewed were taking conventional anti-epileptic drugs. Most had also sought help from traditional South Asian practitioners, but only two people had tried conventional CAM. Decisions to consult a traditional healer were taken by families rather than by individuals with epilepsy. Those who made the decision to consult a traditional healer were usually older family members and their motivations and perceptions of safety and efficacy often differed from those of the recipients of the treatment. No-one had discussed the use of traditional therapies with their doctor. The patterns observed in the UK mirrored those reported among people with epilepsy in India and Pakistan.\n\n\nCONCLUSION\nThe health care-seeking behaviour of study participants, although mainly confined within the ethnomedicine sector, shared much in common with that of people who use global CAM. The appeal of traditional therapies lay in their religious and moral legitimacy within the South Asian community, especially to the older generation who were disproportionately influential in the determination of treatment choices. As a second generation made up of people of Pakistani origin born in the UK reach the age when they are the influential decision makers in their families, resort to traditional therapies may decline. People had long experience of navigating plural systems of health care and avoided potential conflict by maintaining strict separation between different sectors. Health care practitioners need to approach these issues with sensitivity and to regard traditional healers as potential allies, rather than competitors or quacks.", "title": "" }, { "docid": "62376954e4974ea2d52e96b373c67d8a", "text": "Imagine the following situation. You’re in your car, listening to the radio and suddenly you hear a song that catches your attention. It’s the best new song you have heard for a long time, but you missed the announcement and don’t recognize the artist. Still, you would like to know more about this music. What should you do? You could call the radio station, but that’s too cumbersome. 
Wouldn’t it be nice if you could push a few buttons on your mobile phone and a few seconds later the phone would respond with the name of the artist and the title of the music you’re listening to? Perhaps even sending an email to your default email address with some supplemental information. In this paper we present an audio fingerprinting system, which makes the above scenario possible. By using the fingerprint of an unknown audio clip as a query on a fingerprint database, which contains the fingerprints of a large library of songs, the audio clip can be identified. At the core of the presented system are a highly robust fingerprint extraction method and a very efficient fingerprint search strategy, which enables searching a large fingerprint database with only limited computing resources.", "title": "" }, { "docid": "f4dc67d810d5f104f91c8724630992cf", "text": "Apoptosis is deregulated in many cancers, making it difficult to kill tumours. Drugs that restore the normal apoptotic pathways have the potential for effectively treating cancers that depend on aberrations of the apoptotic pathway to stay alive. Apoptosis targets that are currently being explored for cancer drug discovery include the tumour-necrosis factor (TNF)-related apoptosis-inducing ligand (TRAIL) receptors, the BCL2 family of anti-apoptotic proteins, inhibitor of apoptosis (IAP) proteins and MDM2.", "title": "" }, { "docid": "9e3a4b9df7cdc331b7a6b7301d011f8d", "text": "Goals of parking lot management system include counting the number of parked vehicles, monitoring the changes of the parked vehicles over the time, and identifying the stalls available. To decrease the cost of the production, an integrated vision-based system is a good choice. In this paper, we propose a vision-based parking management system to manage an outdoor parking lot by four cameras set up at loft of buildings around it, sending information, including real-time display, to database of ITS center via internet. This system enables drivers to find parking spaces available or monitoring the parking lot where they parked their cars easily by wireless communication device. To increase accuracy, in the beginning, color manage is done to all input images, maintaining color consistency. Then, an adaptive parking lot background model is generated. The adequate color of each parking space is found out using statistical method in color image sequences captured by a camera, and foreground is extracted based on color information. The result will be further modified by shadow detection based on luminance analysis. Vision-based parking management system can manage large area by just several cameras. Adjusting position of the camera can easily make this system suitable for most cases. Besides, this system is endurable and is easy-installed because of its simple equipment.", "title": "" }, { "docid": "068df85fd09061ebcdd599974c865675", "text": "The use of RFID (radio-frequency identification) in the retail supply chain and at the point of sale (POS) holds much promise to revolutionize the process by which products pass from manufacturer to retailer to consumer. The basic idea of RFID is a tiny computer chip placed on pallets, cases, or items. The data on the chip can be read using a radio beam. RFID is a newer technology than bar codes, which are read using a laser beam. 
RFID is also more effective than bar codes at tracking moving objects in environments where bar code labels would be suboptimal or could not be used as no direct line of sight is available, or where information needs to be automatically updated. RFID is based on wireless (radio) systems, which allows for noncontact reading of data about products, places, times, or transactions, thereby giving retailers and manufacturers alike timely and accurate data about the flow of products through their factories, warehouses, and stores. Background", "title": "" }, { "docid": "aa5c22fa803a65f469236d2dbc5777a3", "text": "This article presents data on CVD and risk factors in Asian women. Data were obtained from available cohort studies and statistics for mortality from the World Health Organization. CVD is becoming an important public health problem among Asian women. There are high rates of CHD mortality in Indian and Central Asian women; rates are low in southeast and east Asia. Chinese and Indian women have very high rates and mortality from stroke; stroke is also high in central Asian and Japanese women. Hypertension and type 2 DM are as prevalent as in western women, but rates of obesity and smoking are less common. Lifestyle interventions aimed at prevention are needed in all areas.", "title": "" }, { "docid": "dabfd831ec8eaf37f662db3c75e68a5b", "text": "Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine its success. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary tasks, amongst which a novel setup, and correlate their impact to datadependent conditions. Our results show that MTL is not always effective, significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable.", "title": "" }, { "docid": "c9e5b064a0c09300cdfaeba72898d11e", "text": "BACKGROUND\nInterrogative suggestibility and compliance are important psychological vulnerabilities during interrogation. The aim of the study was to investigate the relationship of suggestibility and compliance with childhood and current symptoms of attention deficit hyperactivity disorder (ADHD). Compliance has not been studied previously in relation to ADHD. A further aim was to investigate the relationship between ADHD and the reporting of having made a false confession to the police.\n\n\nMETHOD\nThe participants were 90 male prisoners, all of whom had completed the Gudjonsson Suggestibility and Compliance Scales (GSS and GCS) within 10 days of admission to the prison. Childhood ADHD symptoms were screened by the Wender Utah Rating Scale (WURS) and current adult symptoms by the DSM-IV Checklist criteria for ADHD.\n\n\nRESULTS\nHalf of the prisoners (50%) were found on screening to meet criteria for ADHD in childhood and, of those, over half (60%) were either fully symptomatic or in partial remission of their symptoms. ADHD symptoms were found to be significantly associated with compliance, but not with suggestibility. The relationship with compliance was stronger (effect size) in relation to current than childhood symptoms. 
The ADHD symptomatic groups were significantly more likely to claim that they had made a false confession to the police in the past.\n\n\nCONCLUSIONS\nThe findings raise important questions about the potential vulnerability of adults with ADHD symptoms in terms of their ability to cope with interrogation.", "title": "" }, { "docid": "bff4e56678cee249b43b519d0d28638c", "text": "The most successful exorcism of Maxwell’s demon is Smoluchowski’s 1912 observation that thermal fluctuations would likely disrupt the operation of any molecular-scale demonic machine. A later tradition sought to exorcise Maxwell’s demon by assessing the entropic cost of the demon’s processing of information. This later tradition fails since these same thermal fluctuations invalidate the molecular-scale manipulations upon which the thermodynamics of computation is based. A new argument concerning conservation of phase space volume shows that all Maxwell’s demons must fail.", "title": "" }, { "docid": "9b32c1ea81eb8d8eb3675c577cc0e2fc", "text": "Users' addiction to online social networks is discovered to be highly correlated with their social connections in the networks. Dense social connections can effectively help online social networks retain their active users and improve the social network services. Therefore, it is of great importance to make a good prediction of the social links among users. Meanwhile, to enjoy more social network services, users nowadays are usually involved in multiple online social networks simultaneously. Formally, the social networks which share a number of common users are defined as the \"aligned networks\".With the information transferred from multiple aligned social networks, we can gain a more comprehensive knowledge about the social preferences of users in the pre-specified target network, which will benefit the social link prediction task greatly. However, when transferring the knowledge from other aligned source networks to the target network, there usually exists a shift in information distribution between different networks, namely domain difference. In this paper, we study the social link prediction problem of the target network, which is aligned with multiple social networks concurrently. To accommodate the domain difference issue, we project the features extracted for links from different aligned networks into a shared lower-dimensional feature space. Moreover, users in social networks usually tend to form communities and would only connect to a small number of users. Thus, the target network structure has both the low-rank and sparse properties. We propose a novel optimization framework, SLAMPRED, to combine both these two properties aforementioned of the target network and the information of multiple aligned networks with nice domain adaptations. Since the objective function is a linear combination of convex and concave functions involving nondifferentiable regularizers, we propose a novel optimization method to iteratively solve it. Extensive experiments have been done on real-world aligned social networks, and the experimental results demonstrate the effectiveness of the proposed model.", "title": "" }, { "docid": "df404258bca8d16cabf935fd94fc7463", "text": "Training deep neural networks with Stochastic Gradient Descent, or its variants, requires careful choice of both learning rate and batch size. While smaller batch sizes generally converge in fewer training epochs, larger batch sizes offer more parallelism and hence better computational efficiency. 
We have developed a new training approach that, rather than statically choosing a single batch size for all epochs, adaptively increases the batch size during the training process. Our method delivers the convergence rate of small batch sizes while achieving performance similar to large batch sizes. We analyse our approach using the standard AlexNet, ResNet, and VGG networks operating on the popular CIFAR10, CIFAR-100, and ImageNet datasets. Our results demonstrate that learning with adaptive batch sizes can improve performance by factors of up to 6.25 on 4 NVIDIA Tesla P100 GPUs while changing accuracy by less than 1% relative to training with fixed batch sizes.", "title": "" }, { "docid": "1dff8a1fae840411defec05db479040c", "text": "This paper investigates the use of n-tuple systems as position value functions for the game of Othello. The architecture is described, and then evaluated for use with temporal difference learning. Performance is compared with previously developed weighted piece counters and multi-layer perceptrons. The n-tuple system is able to defeat the best performing of these after just five hundred games of self-play learning. The conclusion is that n-tuple networks learn faster and better than the other more conventional approaches.", "title": "" }, { "docid": "538047fc099d0062ab100343b26f5cb7", "text": "AIM\nTo examine the evidence on the association between cannabis and depression and evaluate competing explanations of the association.\n\n\nMETHODS\nA search of Medline, Psychinfo and EMBASE databases was conducted. All references in which the terms 'cannabis', 'marijuana' or 'cannabinoid', and in which the words 'depression/depressive disorder/depressed', 'mood', 'mood disorder' or 'dysthymia' were collected. Only research studies were reviewed. Case reports are not discussed.\n\n\nRESULTS\nThere was a modest association between heavy or problematic cannabis use and depression in cohort studies and well-designed cross-sectional studies in the general population. Little evidence was found for an association between depression and infrequent cannabis use. A number of studies found a modest association between early-onset, regular cannabis use and later depression, which persisted after controlling for potential confounding variables. There was little evidence of an increased risk of later cannabis use among people with depression and hence little support for the self-medication hypothesis. There have been a limited number of studies that have controlled for potential confounding variables in the association between heavy cannabis use and depression. These have found that the risk is much reduced by statistical control but a modest relationship remains.\n\n\nCONCLUSIONS\nHeavy cannabis use and depression are associated and evidence from longitudinal studies suggests that heavy cannabis use may increase depressive symptoms among some users. It is still too early, however, to rule out the hypothesis that the association is due to common social, family and contextual factors that increase risks of both heavy cannabis use and depression. Longitudinal studies and studies of twins discordant for heavy cannabis use and depression are needed to rule out common causes.
If the relationship is causal, then on current patterns of cannabis use in the most developed societies cannabis use makes, at most, a modest contribution to the population prevalence of depression.", "title": "" }, { "docid": "8f0ac7417daf0c995263274738dcbb13", "text": "Technology platform strategies offer a novel way to orchestrate a rich portfolio of contributions made by the many independent actors who form an ecosystem of heterogeneous complementors around a stable platform core. This form of organising has been successfully used in the smartphone, gaming, commercial software, and other industrial sectors. While technology ecosystems require stability and homogeneity to leverage common investments in standard components, they also need variability and heterogeneity to meet evolving market demand. Although the required balance between stability and evolvability in the ecosystem has been addressed conceptually in the literature, we have less understanding of its underlying mechanics or appropriate governance. Through an extensive case study of a business software ecosystem consisting of a major multinational manufacturer of enterprise resource planning (ERP) software at the core, and a heterogeneous system of independent implementation partners and solution developers on the periphery, our research identifies three salient tensions that characterize the ecosystem: standard-variety; control-autonomy; and collective-individual. We then highlight the specific ecosystem governance mechanisms designed to simultaneously manage desirable and undesirable variance across each tension. Paradoxical tensions may manifest as dualisms, where actors are faced with contradictory and disabling „either/or‟ decisions. Alternatively, they may manifest as dualities, where tensions are framed as complementary and mutually-enabling. We identify conditions where latent, mutually enabling tensions become manifest as salient, disabling tensions. By identifying conditions in which complementary logics are overshadowed by contradictory logics, our study further contributes to the understanding of the dynamics of technology ecosystems, as well as the effective design of technology ecosystem governance that can explicitly embrace paradoxical tensions towards generative outcomes.", "title": "" }, { "docid": "c5a15fd3102115aebc940cbc4ce5e474", "text": "We present a novel approach for visual detection and attribute-based search of vehicles in crowded surveillance scenes. Large-scale processing is addressed along two dimensions: 1) large-scale indexing, where hundreds of billions of events need to be archived per month to enable effective search and 2) learning vehicle detectors with large-scale feature selection, using a feature pool containing millions of feature descriptors. Our method for vehicle detection also explicitly models occlusions and multiple vehicle types (e.g., buses, trucks, SUVs, cars), while requiring very few manual labeling. It runs quite efficiently at an average of 66 Hz on a conventional laptop computer. Once a vehicle is detected and tracked over the video, fine-grained attributes are extracted and ingested into a database to allow future search queries such as “Show me all blue trucks larger than 7 ft. length traveling at high speed northbound last Saturday, from 2 pm to 5 pm”. 
We perform a comprehensive quantitative analysis to validate our approach, showing its usefulness in realistic urban surveillance settings.", "title": "" }, { "docid": "8e09b4718b472dbb7df2bc4ab8d8750a", "text": "In this article, we propose an access control mechanism for Web-based social networks, which adopts a rule-based approach for specifying access policies on the resources owned by network participants, and where authorized users are denoted in terms of the type, depth, and trust level of the relationships existing between nodes in the network. Different from traditional access control systems, our mechanism makes use of a semidecentralized architecture, where access control enforcement is carried out client-side. Access to a resource is granted when the requestor is able to demonstrate being authorized to do that by providing a proof. In the article, besides illustrating the main notions on which our access control model relies, we present all the protocols underlying our system and a performance study of the implemented prototype.", "title": "" }, { "docid": "c4f775420405ef9e2e69c37ecaa172c8", "text": "Battery-charging algorithms can be used for either single- or multiple-battery chemistries. In general, single-chemistry chargers have the advantages of simplicity and reliability. On the other hand, multichemistry chargers, or “universal battery chargers,” provide a practical option for multichemistry battery systems, particularly for portable appliances, but they have some limitations. This paper presents a review of some charging algorithms for major batteries, i.e., nickel-cadmium, nickel-metal-hydride, and lithium-ion batteries for single- and multiple-chemistry chargers. A comparison between these algorithms in terms of their charging schemes and charge termination techniques is included. In addition, some trends of recent chargers development are presented.", "title": "" } ]
scidocsrr
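Note on the record format: each entry above is a passage record with a docid, a free-text "text" field, and an often-empty "title", grouped into positive and negative passage lists for a query and tagged with a subset name such as scidocsrr. The sketch below shows one way rows of this shape could be parsed and iterated; it is illustrative only, assumes a JSON-lines style serialization, and every concrete value in it (the sample query, the placeholder docids, the file layout, and the keyword-overlap score used as a stand-in relevance signal) is invented for the example rather than taken from the dataset.

import json

# One row serialized the way a JSON-lines dump of these records might store it.
# The field names (query_id, query, positive_passages, negative_passages,
# docid, text, title, subset) mirror the records above; the concrete values
# are placeholders invented for this sketch.
SAMPLE_LINE = json.dumps({
    "query_id": "0" * 32,
    "query": "example query text",
    "positive_passages": [
        {"docid": "a" * 32,
         "text": "A passage that is relevant to the example query.",
         "title": ""},
    ],
    "negative_passages": [
        {"docid": "b" * 32,
         "text": "A passage about an unrelated topic.",
         "title": ""},
    ],
    "subset": "scidocsrr",
})

def iter_passages(row):
    """Yield (relevance_label, passage) pairs for one parsed row."""
    for passage in row.get("positive_passages", []):
        yield 1, passage
    for passage in row.get("negative_passages", []):
        yield 0, passage

def keyword_overlap(query, passage_text):
    """Toy relevance signal: count query words that reappear in the passage."""
    return len(set(query.lower().split()) & set(passage_text.lower().split()))

if __name__ == "__main__":
    row = json.loads(SAMPLE_LINE)  # in practice, one json.loads call per file line
    for label, passage in iter_passages(row):
        score = keyword_overlap(row["query"], passage["text"])
        print(passage["docid"][:8], "label:", label, "keyword_overlap:", score)

In a real reranking evaluation the keyword_overlap placeholder would be replaced by the score of an actual retrieval or reranking model; the traversal of the row structure would stay the same.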